Inspired by human learning, a team of UC Berkeley researchers has developed a technique that allows robots to perform tasks through trial and error without being given specific details about their surroundings.

To demonstrate the learning technique, the researchers told a robot to complete various tasks — assembling a toy plane, screwing a cap on a water bottle, putting a clothes hanger on a rack — without giving the robot information about its environment or instructions on how to perform each stage of the task.

“The way to enable robots to deal with unstructured environments is to equip them with the ability to learn,” said Pieter Abbeel, an associate professor in the campus’s electrical engineering and computer sciences department, who led the team of researchers. “Think about it — programming every possible scenario would take forever.”

Abbeel and his team — Trevor Darrell, director of the Berkeley Vision and Learning Center; postdoctoral researcher Sergey Levine; and doctoral student Chelsea Finn — programmed the robot to learn through trial and error by giving it real-time scores for each attempt until it mastered the skill.

By defining a goal numerically and giving the robot scores, “the program tells the robot how well it’s doing by defining what the task is rather than how,” Levine said. “The robot has to figure out how to do the task on its own.”
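The scoring scheme Levine describes resembles a reward function in reinforcement learning: the program specifies what success looks like as a number, and the robot searches for behavior that raises that number. A minimal sketch of the idea (all names and the simple random-perturbation search are illustrative assumptions, not the team's actual code):

```python
import numpy as np

def score(block_pos, target_pos):
    """Score an attempt by how close the object ends up to the goal.
    This defines *what* the task is (distance to target), not *how* to do it."""
    return -np.linalg.norm(np.asarray(block_pos) - np.asarray(target_pos))

rng = np.random.default_rng(0)
target = np.array([1.0, 2.0])   # goal defined numerically
action = np.zeros(2)            # stand-in for a motor command
best_score = score(action, target)

# Trial and error: perturb the current best action, get a real-time score
# for each attempt, and keep whichever attempt scores highest.
for trial in range(200):
    candidate = action + rng.normal(scale=0.3, size=2)
    s = score(candidate, target)
    if s > best_score:
        action, best_score = candidate, s

print(best_score)  # approaches 0 as the robot "masters" the task
```

The point of the sketch is the division of labor: the researcher writes only the scoring function, and the search loop, however crude, figures out the action on its own.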

The technique also utilized a branch of artificial intelligence known as “deep learning,” which allowed the robot to take in images and output motor commands. Through deep learning, the robot can automatically acquire a multitude of skills, eliminating the need for roboticists to program each one separately.
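"Images in, motor commands out" can be made concrete with a toy policy network. The layer sizes and two-layer architecture below are illustrative assumptions for brevity, not the network the Berkeley team used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "policy network": maps a camera frame directly to motor commands,
# with no hand-programmed perception or control stage in between.
IMG_H, IMG_W = 32, 32   # downsampled grayscale camera frame (assumed size)
HIDDEN = 64             # a single hidden layer, for brevity
MOTORS = 7              # e.g. one torque per joint of a 7-joint arm

W1 = rng.normal(scale=0.01, size=(HIDDEN, IMG_H * IMG_W))
W2 = rng.normal(scale=0.01, size=(MOTORS, HIDDEN))

def policy(image):
    """Pixels in, torques out."""
    x = image.reshape(-1)         # flatten the image into a vector
    h = np.maximum(0.0, W1 @ x)   # ReLU hidden layer
    return W2 @ h                 # one command per motor

frame = rng.random((IMG_H, IMG_W))  # stand-in for a camera image
torques = policy(frame)
print(torques.shape)  # (7,)
```

Because the same network structure works for any task, learning a new skill means learning new weights, which is what removes the need to program each skill separately.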

“In order for robots to succeed in the real world, they need large repertoires of behaviors,” Levine said.

While human learning is not yet well understood in detail, Abbeel said, its flexibility was a “big inspiration” for his team’s work. The team began working on this project last October, but it builds on previous work concerning motor skills, he added.

The next step, Abbeel said, is to program the robot to recognize its own success and failure by giving itself scores rather than having the researchers program it to do so. Eventually, he said, he hopes to make robots smart enough to learn skills for many different applications, including cooking meals and providing disaster relief.

The research is part of an initiative launched in April at the UC system’s Center for Information Technology Research in the Interest of Society.

“We really want to bring together faculty (across campuses) and … collaborate and get things done together,” Abbeel said. “What we are after is to get robots to do good things for people — that’s our inspiration.”

The team, with the exception of Darrell, will present its latest findings in Seattle on Thursday at the IEEE International Conference on Robotics and Automation.

Amy Jiang is a news editor. Contact her at [email protected] and follow her on Twitter @ajiang_dc.