As clever as robots are getting, one of the key things that separates them from humans is self-awareness. Debate rages over what exactly it means for something to be self-aware, whether robots could ever achieve it, and the ethical implications it might dredge up. Now, researchers from Columbia Engineering have gone and done it, giving a robot arm a form of self-awareness – at least in a rudimentary sense – that allows it to better adapt to changing conditions.

Most robots are designed for a specific job, and to let them perform it at their best, humans program them with a "self-simulation" that essentially tells them enough about themselves to understand what movements they can possibly make, and when they should make them. This kind of narrow AI works well under the circumstances the robot is designed for, but if conditions change or the robot gets damaged, it doesn't necessarily know how to adapt.

Humans, on the other hand, spend their whole lives developing and adapting their own self-image. You know, generally, what your body is capable of – you understand that, for example, your elbows bend in one direction, and you know the range of movements that allows you to make. If those capabilities change due to injuries or just regular aging, it's not too hard for your brain to adjust.

The Columbia researchers wanted to try to give that kind of self-awareness to a robot. They started with an articulated robot arm that had four degrees of freedom, and rather than give it a pre-formed self-model, they let it build its own using deep learning techniques.

At first, the robot didn't know what shape it was, so it moved randomly to collect data about what it could and couldn't do, and built a self-model from that (Image: Columbia Engineering)

At first, the robot had absolutely no idea what shape it was and flailed around randomly. But those random movements were an important first step, allowing it to collect about 1,000 trajectories, each one containing 100 points. From there, the robot was able to use deep learning to build its own self-model, and readjust it as it explored. The team says that the first self-models were way off, but after being trained for about 35 hours, it was accurate to within 4 cm (1.6 in) of the real robot.
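The pipeline described above – random "motor babbling," then fitting a model that predicts where the arm ends up for a given set of joint commands – can be sketched in miniature. The code below is a toy stand-in, not the paper's method: a two-joint planar arm replaces the 4-DOF robot, and a least-squares fit on hand-picked features replaces the deep network. All names and numbers here are illustrative.

```python
# Toy sketch of self-model learning: collect random motions, then fit a
# model mapping joint angles -> hand position from that data alone.
import numpy as np

rng = np.random.default_rng(0)

def true_arm(q):
    # Ground-truth forward kinematics (hidden from the learner):
    # a planar 2-link arm with link lengths 1.0 and 0.8.
    x = np.cos(q[:, 0]) + 0.8 * np.cos(q[:, 0] + q[:, 1])
    y = np.sin(q[:, 0]) + 0.8 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

def features(q):
    # Features rich enough to represent the kinematics exactly;
    # the real study learns such structure with a deep network instead.
    return np.stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                     np.cos(q[:, 0] + q[:, 1]),
                     np.sin(q[:, 0] + q[:, 1])], axis=1)

# "Motor babbling": 1,000 random configurations, echoing the article's count.
q_train = rng.uniform(-np.pi, np.pi, size=(1000, 2))
W, *_ = np.linalg.lstsq(features(q_train), true_arm(q_train), rcond=None)

def self_model(q):
    # The learned self-model: predicts hand position without the real arm.
    return features(q) @ W

q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.linalg.norm(self_model(q_test) - true_arm(q_test), axis=1).max()
print(f"max self-model error: {err:.2e} (arm units)")
```

In this toy version the features match the true kinematics, so the fit is near-perfect; the real robot's deep-learned model, by contrast, converged to within about 4 cm after roughly 35 hours of training.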

To test how well it had learned, the team then had the robot arm perform a basic pick-and-place task, using its newly formed self-model as a guide. By allowing the robot to recalibrate its position at each step of the way, it was able to pick up a set of objects and put them in a container with 100 percent success. When the team changed the task so the robot had to depend entirely on its self-model, with no external feedback, its success rate dropped to 44 percent. That doesn't sound great, but the team says it's still impressive, comparing it to a human doing the job with their eyes closed.
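The gap between those two success rates comes down to closed-loop versus open-loop control: recalibrating at each step lets small self-model errors get corrected, while planning once and never looking again lets them accumulate. A deliberately simplified 1-D illustration (hypothetical gains, not the paper's setup) makes the difference concrete:

```python
# Toy 1-D "arm": true position = gain_true * command, but the self-model
# believes the gain is gain_model. Compare one-shot (open-loop) planning
# against step-by-step correction (closed-loop).
gain_true, gain_model = 1.0, 0.9   # hypothetical, slightly mismatched
target = 5.0

# Open loop: plan once with the imperfect self-model, never observe.
cmd = target / gain_model
open_loop_err = abs(gain_true * cmd - target)

# Closed loop: after each move, observe the real position and re-plan
# the correction using the same imperfect self-model.
pos, cmd = 0.0, 0.0
for _ in range(20):
    cmd += (target - pos) / gain_model  # correction planned via self-model
    pos = gain_true * cmd               # where the arm actually ended up
closed_loop_err = abs(pos - target)

print(f"open-loop error:   {open_loop_err:.4f}")
print(f"closed-loop error: {closed_loop_err:.2e}")
```

Even with the same flawed self-model, the closed-loop version converges on the target, which is why the robot managed 100 percent success when allowed to recalibrate and only 44 percent "with its eyes closed."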

The next step was to test how well the robot could adapt to new circumstances. The researchers 3D printed and installed a new part that changed the shape of the arm. Sure enough, it didn't take too long for the robot to realize that its self-model was out of date, and it was able to update itself and continue its work.
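The adaptation step the paragraph describes amounts to monitoring how far the self-model's predictions drift from what the arm actually does, and re-fitting the model when the mismatch gets large. Here is a minimal hedged sketch of that idea (a 1-D linear "arm," an invented drift threshold, and a least-squares re-fit stand in for the study's deep-learning machinery):

```python
# Sketch: detect that the self-model has gone stale, then re-fit it.
import numpy as np

rng = np.random.default_rng(1)
true_gain = 2.0            # the arm's real behavior
model_gain = 2.0           # the learned self-model (initially accurate)
THRESHOLD = 0.1            # hypothetical drift threshold

def observe(cmd):
    # Real arm response, with a little sensor noise.
    return true_gain * cmd + rng.normal(0, 0.01)

# Routine monitoring: prediction error is small while the model is current.
errs_before = [abs(observe(c) - model_gain * c) for c in rng.uniform(-1, 1, 50)]

true_gain = 2.5            # the "3D-printed part": the arm's shape changes

# The same monitoring now shows a large, persistent mismatch.
errs_after = [abs(observe(c) - model_gain * c) for c in rng.uniform(-1, 1, 50)]
stale = np.mean(errs_after) > THRESHOLD

if stale:
    # Re-fit the self-model on freshly collected (command, position) pairs.
    cmds = rng.uniform(-1, 1, 100)
    poss = np.array([observe(c) for c in cmds])
    model_gain = float(cmds @ poss / (cmds @ cmds))

print(f"stale: {stale}, updated gain: {model_gain:.2f}")
```

The detect-then-refit loop is the essence of what let the modified robot notice its self-model was out of date and get back to work.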

As part of its tests, the team 3D printed a new part (pink) that changed the arm's shape, and the robot was able to adjust its internal model of itself and get back to work (Image: Columbia Engineering)

Although it's a far cry from the ability of humans and other animals, the researchers say this could be similar to how human infants learn early motor skills. After all, babies are still developing self-awareness, and those random kicks and jerks are them figuring out how their own bodies work.

"This is perhaps what a newborn child does in its crib, as it learns what it is," says Hod Lipson, co-author of the study. "We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness."

As useful as self-aware robots could be, they do raise new concerns and new questions. How do we control them when things get out of hand? When can something be called conscious? And if they reach that level, what rights might we need to give these beings?

"Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control," says Lipson. "It's a powerful technology, but it should be handled with care."

The research was published in the journal Science Robotics, and the team describes the work in the video below.

Source: Columbia Engineering