Robots are learning how to complete tasks in sped-up virtual worlds, developing skills in a matter of hours that might otherwise take months. With simulated deep reinforcement learning (deep RL), a skill that would normally take an A.I. 55 days to learn in the real world takes only a day in the hyper-accelerated classroom.
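Those figures imply a rough training speedup, which a quick back-of-the-envelope check makes concrete (the numbers come straight from the article; nothing else is assumed):

```python
# Back-of-the-envelope speedup implied by the article's numbers.
REAL_DAYS = 55   # wall-clock days to learn the skill on a physical robot
SIM_DAYS = 1     # wall-clock days in the accelerated simulation

speedup = REAL_DAYS / SIM_DAYS
print(f"The simulator delivers roughly {speedup:.0f}x more experience per day.")
```

In other words, the simulated classroom has to generate about 55 days' worth of practice for every real day that passes.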

“It’s got the potential to really revolutionize what we can do in the robotics domain,” Raia Hadsell, a research scientist with Google DeepMind, said at the Re-Work Deep Learning Summit in London on Thursday. “We can learn human level skills.”

It may sound counterintuitive: surely the whole point of robots is that programmers can teach them to do things, right? A machine operating in the real world, though, needs a whole lot of data to work out how to do a task in an unfamiliar situation. A.I. can use this data to “learn” a skill based on all the instances that came before.

Deep reinforcement learning collects that data in a similar way to how humans learn: a robot attempts a task repeatedly, like catching a ball, and records the outcome each time to build up a picture of how best to catch in a new situation. When DeepMind used the approach in 2013 to teach an A.I. to master Atari games, simply by sitting it down in front of the screen and telling it the end goal, the scientific community loved it.
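The try-record-improve loop described above can be sketched in miniature. This is a toy, not DeepMind's actual system: the tiny one-dimensional "catching" world, the tabular Q-learning update, and every parameter below are invented purely for illustration, where deep RL would use a neural network instead of a lookup table.

```python
import random

random.seed(0)

N = 5                   # paddle/ball positions 0..4 on a tiny 1-D track
ACTIONS = (-1, 0, 1)    # move left, stay, move right
STEPS = 6               # ticks before the episode ends

# Q-table mapping each (paddle, ball) state to a value per action.
Q = {(a, b): [0.0, 0.0, 0.0] for a in range(N) for b in range(N)}

def episode(train=True, eps=0.2, alpha=0.5, gamma=0.9):
    """One attempt at catching: act, observe reward, update the table."""
    paddle, ball = random.randrange(N), random.randrange(N)
    for _ in range(STEPS):
        s = (paddle, ball)
        if train and random.random() < eps:
            ai = random.randrange(3)                       # explore
        else:
            ai = max(range(3), key=lambda i: Q[s][i])      # exploit
        paddle = min(N - 1, max(0, paddle + ACTIONS[ai]))
        reward = 1.0 if paddle == ball else 0.0
        if train:  # record the outcome to improve the next attempt
            s2 = (paddle, ball)
            Q[s][ai] += alpha * (reward + gamma * max(Q[s2]) - Q[s][ai])
    return paddle == ball

for _ in range(3000):       # thousands of repeated attempts
    episode(train=True)

caught = sum(episode(train=False) for _ in range(200))
print(f"caught {caught} of 200 test balls after training")
```

After a few thousand attempts the table encodes "move toward the ball, then hold position," which is exactly the build-up-a-picture-from-repetition idea, just at doll's-house scale.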

The problem is, this takes forever. You need to throw balls at a robot repeatedly, or in the Atari case, leave the robot alone in its bedroom for a while. By running a MuJoCo physics simulation combined with a progressive neural network, trainers can run a program that mimics the robot, then transfer the learned behaviors to the physical machine, mapping the virtual movements into the real world.

“We can run those simulators all day and all night,” Hadsell said.

The results speak for themselves. This robot, which got its diploma in catching, can now follow virtual balls as if they were real, priming it for the big day when it’s asked to catch a real ball.