Toddlers often teeter rather precariously, and adorably, on the boundary between falling over and staying upright. Darwin is no different -- except that it's a robot.

Darwin is a humanoid robot, built to understand and demonstrate the numerous ways in which machines can learn to navigate challenging and unfamiliar environments. It learns to perform tasks in the same way that children do -- by imagining them first.

The group responsible for Darwin, at the University of California at Berkeley, hopes the approach will allow robots to learn more naturally, and avoid the extensive periods of testing to which robots are currently subjected.

Darwin is controlled by neural networks -- algorithms that mimic the way learning happens in humans. When it's put into an unfamiliar position, MIT Technology Review reports -- such as a new pose, on the floor -- these networks work on their own to find a solution. Connections between the simulated neurons, as in humans, strengthen and weaken in response to stimuli. A network layered in this way is known as a deep-learning network.
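To make the idea of connections strengthening and weakening concrete, here is a minimal sketch -- not Darwin's actual controller, and far simpler than a deep network -- of a single simulated neuron whose connection weights are nudged up or down each time it sees a training example:

```python
from math import exp


def sigmoid(x):
    """Squash a raw sum into a firing strength between 0 and 1."""
    return 1.0 / (1.0 + exp(-x))


def train(examples, steps=1000, lr=0.5):
    """Learn connection weights from (inputs, target) pairs.

    Illustrative only: a one-neuron logistic unit trained by a
    simple gradient step, standing in for the much larger networks
    the article describes.
    """
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(steps):
        for inputs, target in examples:
            out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            err = target - out
            # Each connection strengthens or weakens in proportion
            # to its input's contribution to the error.
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    return weights, bias


# Teach the neuron the logical AND of two inputs.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
```

A deep-learning network stacks many layers of such units and adjusts all their connections at once, but the underlying mechanism -- repeated small corrections driven by error -- is the same.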


Darwin has already learned how to stand independently, move its hands and stay upright when the ground beneath it tilts, the team reports. The next step is to apply that principle to other forms of movement and other tasks. "The research direction is very exciting," said Dieter Fox, part of the research team. "The problem is always if you want to act in the real world. Models are imperfect. Where machine learning, and especially deep learning, comes in is learning from the real-world interactions of the system."