Humans understand feedback and improve through repeated trial and error. Robots, on the other hand, do not: they follow only what is hardcoded in their algorithms, which leaves them helpless without direct input from humans. Recent advances, however, are beginning to change that.

Meet BRETT, the Berkeley Robot for the Elimination of Tedious Tasks, which can figure out many things on its own without any assistance from a human. The robot learns motor tasks through trial and error, using a process quite similar to the way humans learn.
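To make the trial-and-error idea concrete, here is a toy sketch (not BRETT's actual algorithm, which uses deep reinforcement learning): a "robot" repeatedly perturbs a single hypothetical motor parameter, keeps variations that score better on the task, and discards the rest. The target angle and scoring function are invented for illustration.

```python
import random

random.seed(42)

TARGET = 0.7  # hypothetical ideal joint angle for the task


def score(angle):
    # Higher is better: negative distance from the ideal angle,
    # which the learner itself never sees directly.
    return -abs(angle - TARGET)


angle = 0.0                                          # initial guess
for trial in range(200):
    candidate = angle + random.uniform(-0.1, 0.1)    # try a small variation
    if score(candidate) > score(angle):              # keep improvements...
        angle = candidate                            # ...discard failures

print(round(angle, 2))  # after many trials, close to the ideal angle
```

Real systems replace the single parameter with thousands of control weights and the hand-written score with sensor feedback, but the keep-what-works loop is the same basic principle.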

A product of the Berkeley Robot Learning Lab, BRETT demonstrates this technique by putting a clothes hanger on a rack, assembling a LEGO toy plane, screwing a cap onto a water bottle, and performing similar tasks that remain challenging for artificial intelligence.

"What we’re reporting on here is a new approach to empowering a robot to learn," said Professor Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences. "The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it".

BRETT relies on deep learning, a technique loosely inspired by the neural circuitry the human brain uses to perceive and interact with the world. As the University of California describes it, deep learning programs create "neural nets" in which "layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving."
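A minimal sketch of that layered idea, purely for illustration (the layer sizes, random weights, and fake pixel input are all invented here, not BRETT's code): raw sensory data flows through successive layers of artificial neurons, each transforming the previous layer's output into higher-level features that can be scored against categories.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    # Simple neuron activation: pass positive signals, zero out the rest.
    return np.maximum(0.0, x)


# Fake raw sensory input: a flattened 8x8 "image" of pixel intensities.
pixels = rng.random(64)

# Two layers of artificial neurons. The weights are random here; in a
# real system they would be tuned by trial and error during learning.
W1 = rng.standard_normal((32, 64)) * 0.1  # layer 1: 64 pixels -> 32 features
W2 = rng.standard_normal((4, 32)) * 0.1   # layer 2: 32 features -> 4 categories

hidden = relu(W1 @ pixels)   # low-level features extracted from raw pixels
scores = W2 @ hidden         # scores over four hypothetical categories

category = int(np.argmax(scores))  # the category the net "recognizes"
print(category)
```

Training such a net means adjusting W1 and W2 so the highest score lands on the correct category; stacking more layers is what makes the approach "deep."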

Funded by the Defense Advanced Research Projects Agency (DARPA), the Office of Naval Research, the US Army Research Laboratory, and the National Science Foundation, the latest developments in BRETT will be presented on Thursday, May 28, in Seattle.