The robot is connected to a 3D sensor and a deep neural network that researchers trained on images of objects, along with information about the objects' shapes, visual appearance and the physics of grasping them. So when a new object is placed in front of the robot, it only has to match it to a similar object in its database.

In practice, when the robot was more than 50 percent confident that it could grab a new object, it managed to grip the object without dropping it 98 percent of the time. If it was less than 50 percent confident, the robot would give the object a poke and then decide on a gripping strategy. In those cases, the robot was successful 99 percent of the time. A quick little inspection is all it needs to overcome a lack of confidence.
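The poke-then-replan behavior amounts to a simple confidence-threshold policy. Here is a minimal sketch of that logic; the planner, the poke step, and the confidence scores are hypothetical stand-ins, since the real system scores grasps with a deep network over a 3D sensor scan:

```python
# Sketch of a confidence-threshold grasping policy (illustrative only).
CONFIDENCE_THRESHOLD = 0.5  # the 50 percent cutoff described in the article

def choose_grasp(plan_grasp, poke):
    """plan_grasp() -> (grasp, confidence); poke() gathers extra sensor data."""
    grasp, confidence = plan_grasp()
    if confidence > CONFIDENCE_THRESHOLD:
        return grasp          # confident enough: grip directly
    poke()                    # quick inspection to reduce uncertainty
    grasp, _ = plan_grasp()   # replan with the new information
    return grasp
```

The point of the design is that the cheap extra measurement is taken only when the planner is unsure, which is why the low-confidence cases end up succeeding just as often as the confident ones.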

This method of robot training can shave a lot of time off machine-learning processes and can produce robots with greater dexterity. "We can generate sufficient training data for deep neural networks in a day or so instead of running months of physical trials on a real robot," Jeff Mahler, a postdoctoral researcher working on the project, told MIT Technology Review. The robots currently used in factories are precise and accurate with known objects but adjust poorly when faced with new ones. The efficiency of this training strategy and the reliability of the robot's grip set this method up nicely for commercial use in the future.