Researchers at Google are trying to help robots navigate the world as well as humans can, and that means a lot of practice. Research scientist Sergey Levine and his team took fourteen robotic arms, networked them together, and used convolutional neural networks to let these robots learn on their own how to pick up small objects like a cup, tape dispenser, and neon-green toy dolphin.

Most one-year-old babies can figure out how to grasp and pick up small objects, but the same task is incredibly difficult for machines to learn. A robotic arm is usually programmed to recognize an object and react to it in a fixed, pre-programmed way, so it can't easily respond to changes in its environment the way a human being can. That approach works well for tasks that happen in predictable places, with objects the machine has been taught to handle. But can robots be trained to pick up objects they've never seen before?

To find out, the researchers had the robotic arms make random lunges at boxes full of objects, occasionally picking one up by dumb luck. At the end of every day, the researchers took the data collected during the robots' attempts and used it to train neural networks to better predict the outcome of a grasp. Over the course of 800,000 grasp attempts, the networked arms began to self-correct. Soon they were picking up objects far more frequently, even employing what looked like strategies, such as pushing one object aside in order to grasp another, or using different techniques for soft versus hard objects.
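The loop described above, logging random attempts and then fitting a model to predict whether a grasp will succeed, can be sketched in miniature. This toy uses logistic regression over two made-up grasp parameters (position offset and angle error) in place of a convolutional network over camera images, and an invented success rule in place of the real world; every name and number here is an illustrative assumption, not Google's actual system.

```python
import math
import random


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def simulate_grasp(offset, angle_err):
    # Toy stand-in for the physical trial: a grasp "succeeds" when the
    # gripper lands close to the object. Purely an assumption for the demo.
    return 1 if (offset + angle_err) < 0.5 else 0


def train(attempts, epochs=200, lr=0.5):
    # Logistic regression standing in for the neural network: learn to
    # predict P(success) from the logged grasp parameters.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (offset, angle_err), label in attempts:
            p = sigmoid(w[0] * offset + w[1] * angle_err + b)
            g = p - label  # gradient of the log-loss for this sample
            w[0] -= lr * g * offset
            w[1] -= lr * g * angle_err
            b -= lr * g
    return w, b


def predict(model, offset, angle_err):
    w, b = model
    return sigmoid(w[0] * offset + w[1] * angle_err + b)


random.seed(0)

# Phase 1: random flailing, logging the outcome of each attempt.
attempts = []
for _ in range(500):
    offset, angle_err = random.random(), random.random()
    attempts.append(((offset, angle_err), simulate_grasp(offset, angle_err)))

# Phase 2: at "the end of the day", train the outcome predictor on the log.
model = train(attempts)

# Phase 3: self-correct by picking the candidate motion the model scores
# highest, instead of moving at random.
candidates = [(random.random(), random.random()) for _ in range(20)]
best = max(candidates, key=lambda c: predict(model, *c))
```

The key design point mirrored from the article is that the robot's own trial-and-error log is the training set: no one writes down how to grasp, only whether each attempt worked.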

All of this took place without the researchers ever programming the system to pick these objects up. Using a feedback loop, they reduced the failure rate of grasp attempts to 18 percent. The researchers plan to expand their work to a wider variety of grasping strategies, and later to test the approach in environments and real-world scenarios outside the lab.