Leveraging Big Data for Grasping

We propose a new large-scale database of grasps applied to a large set of objects from numerous categories. The grasps are generated in simulation and annotated with two quality measures: the standard epsilon-metric and a new physics-metric. Each grasp is additionally annotated with a descriptive and efficient representation of the local object shape at which the grasp is applied.
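The epsilon-metric mentioned above is the classical Ferrari-Canny measure: the radius of the largest origin-centered ball contained in the convex hull of the contact wrenches. A minimal sketch of how it can be computed, assuming the friction-cone wrenches have already been sampled into an array (the function name and input format are illustrative, not from the paper):

```python
import numpy as np
from scipy.spatial import ConvexHull


def epsilon_metric(wrenches):
    """Ferrari-Canny epsilon quality: radius of the largest ball centered
    at the origin that fits inside the convex hull of the contact wrenches.

    wrenches: (n, 6) array of sampled contact wrenches (force + torque).
    Returns 0.0 when the origin is not strictly inside the hull,
    i.e. the grasp is not force-closure.
    """
    hull = ConvexHull(wrenches)
    # Each row of hull.equations is [n, b] with n.x + b <= 0 for interior
    # points and ||n|| = 1, so the origin-to-facet distance is -b.
    offsets = -hull.equations[:, -1]
    return float(offsets.min()) if (offsets > 0).all() else 0.0


# Toy check: wrenches at +/- each unit axis form a 6-D cross-polytope;
# the inscribed ball then has radius 1/sqrt(6).
wrenches = np.vstack([np.eye(6), -np.eye(6)])
print(epsilon_metric(wrenches))  # ~0.408
```

Note that epsilon depends on how finely the friction cones are discretized, which is one reason a physics-based metric can be a more robust alternative.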

We use crowdsourcing to analyze how well the two metrics correlate with grasp success as predicted by humans. The results confirm that the proposed physics-metric is a more consistent predictor of grasp success than the epsilon-metric. Furthermore, they support the hypothesis that human labels are not required for good ground-truth grasp data; instead, the physics-metric can be computed directly on simulation data.
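Since human judgments are ordinal labels, the correlation analysis described above is naturally done with a rank statistic. A hedged sketch of such a comparison, with placeholder arrays standing in for the crowdsourced labels and metric scores (the data values are purely illustrative, not results from the paper):

```python
from scipy.stats import spearmanr

# Hypothetical per-grasp data: crowdsourced success ratings and the
# two metric scores for the same five grasps (illustrative values only).
human_ratings = [1, 2, 3, 4, 5]
physics_metric = [2, 1, 4, 3, 5]
epsilon_metric = [3, 1, 2, 5, 4]

# Spearman rank correlation is robust to monotone rescaling of either
# variable, so metrics on different scales remain comparable.
rho_physics, _ = spearmanr(human_ratings, physics_metric)
rho_epsilon, _ = spearmanr(human_ratings, epsilon_metric)
print(rho_physics)  # 0.8 for these placeholder values
print(rho_epsilon)
```

In this setup, the metric with the higher rank correlation against the human ratings would be the more consistent predictor of grasp success.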