The robot apocalypse is nigh. Boston Dynamics’ robots are doing backflips and opening doors for their friends. Oh, and these 7-foot-long robot arms can lift 500 pounds each, which means they could theoretically crush, like, six humans at once.

The robot apocalypse is also laughable. Watch a robot attempt a task it hasn’t been explicitly trained to do, and it’ll fall flat on its face or just give up and catch on fire. And teaching a robot to do something new is exhausting, requiring line after line of code and joystick tutorials in, say, picking up an apple.


But new research out of UC Berkeley is making learning way easier on both the human and machine: By drawing on prior experience, a humanoid-ish robot called PR2 can watch a human pick up an apple and drop it in a bowl, then do the same itself in one try, even if it’s never seen an apple before. It’s not the most complex of tasks, but it’s a big step toward making machines rapidly adapt to our needs, fruit-related or otherwise.

Consider the toothbrush. You know how to brush your teeth because your parents showed you how—put water and paste on the bristles, put the thing in your mouth, scrub, and then spit. You could then draw on that experience to learn how to floss. You know where your teeth are, you know there are gaps between them, and you know you have to use an instrument to clean them. Same principle, but kinda different.

To teach a traditional robot to brush its teeth and floss, you’d have to program two sets of distinct commands—it can’t use the context of prior experience like we can. “A lot of machine learning systems have focused on learning completely from scratch,” says Chelsea Finn, a machine learning researcher at UC Berkeley. “While that is very valuable, that means we don't bake in any knowledge. Essentially, these systems are starting with a blank mind every time they learn every single task if they want to learn.”

Finn’s system instead provides the humanoid-ish robot with valuable experience. “We collected videos of humans doing a number of different tasks,” she says. “We collected demonstrations of robots doing the same tasks via teleoperation, and we trained it such that after it sees a video of a human doing one thing, the robot can learn to imitate that thing as well.”
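For the curious, the recipe Finn describes—meta-train on paired human/robot demonstrations so the robot can adapt to a new task from a single human video—can be caricatured in a few lines of code. This is a toy sketch only, under loud assumptions: the real system trains deep networks on raw video, while here a "task" is just a hidden target vector, the "human demo" and "robot demo" are noisy observations of it, the "policy" is the parameter vector itself, and adaptation is one MAML-style inner gradient step. Every name below is illustrative, not from the actual research code.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.1   # inner (adaptation) learning rate
BETA = 0.05   # outer (meta-training) learning rate
DIM = 3       # size of our toy "policy" parameter vector

def sample_task():
    """A 'task' is a hidden target; demos are noisy observations of it."""
    target = rng.normal(size=DIM)
    human_demo = target + 0.1 * rng.normal(size=DIM)
    robot_demo = target + 0.1 * rng.normal(size=DIM)
    return human_demo, robot_demo

def adapt(theta, human_demo):
    """Inner step: one gradient step toward imitating the human demo."""
    grad = 2.0 * (theta - human_demo)   # gradient of ||theta - demo||^2
    return theta - ALPHA * grad

def meta_loss(theta, tasks):
    """How well the *adapted* policy matches each robot demo, on average."""
    return np.mean([np.sum((adapt(theta, h) - r) ** 2) for h, r in tasks])

# Start the meta-parameters deliberately far from any task.
theta = np.full(DIM, 3.0)
tasks = [sample_task() for _ in range(20)]
before = meta_loss(theta, tasks)

for _ in range(200):   # outer loop: meta-train across many tasks
    h, r = tasks[rng.integers(len(tasks))]
    adapted = adapt(theta, h)
    # For this quadratic toy, d(adapt)/d(theta) = (1 - 2*ALPHA) * I,
    # so the outer gradient can be written exactly:
    outer_grad = 2.0 * (1.0 - 2.0 * ALPHA) * (adapted - r)
    theta -= BETA * outer_grad

after = meta_loss(theta, tasks)
print(before, after)   # meta-loss drops: one-step adaptation now works
```

The point of the structure, mirroring the quote above: the outer loop never asks the initial parameters to solve any single task. It asks them to be a good *starting point*, so that one look at a human demonstration (the inner step) is enough to imitate it.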