In The Matrix, Morpheus tells Neo that their digital appearance is based on their “residual self-image.” That is, the characters look how they imagine themselves to look, based on their own mental models of themselves.

In the real world, scientists have been trying to teach robots that trick as well. That’s because, unlike the warring machines of The Matrix, a real-life robot with an accurate self-image might benefit humanity. It would allow for faster programming and more accurate planning, and help a device self-diagnose when something’s gone wrong. It could even help a robot adapt to any damage it sustains.

And on Wednesday, a pair of scientists from Columbia University said they’d given a robotic arm this self-awareness, and, in the process, new potential to learn. Their research is published in the journal Science Robotics.

A Robot Arm’s Self-Image

The paper’s surprisingly readable, and its abstract reads, in its entirety: “A robot modeled itself without prior knowledge of physics or its shape and used the self-model to perform tasks and detect self-damage.” (Sounds like a Netflix movie description … I’d watch it!)

The researchers bought a standard robotic arm — the intimidatingly named WidowX — and taught it to model itself. They ran it through 1,000 random trajectories and, basically, had it observe what happened: how certain movements felt, what was possible, what was inefficient, everything. The authors even compared the process to a human first learning the capabilities of their own limbs, writing, “This step is not unlike a babbling baby observing its hands.”

Armed with all that data, the robot used deep learning to generate its own self-image, i.e., an accurate model of itself. It took a while, and the initial models were way off the mark, but after about 34 hours of training, the self-model was accurate to within 4 centimeters. That was good enough to allow it to become an expert at picking up and moving small balls around — a typical stand-in for robotic dexterity. The self-image was accurate enough that, without any further training, the robot could perform a totally different task: handwriting a word with a marker. (The robotic arm says “hi,” by the way.)
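The learning loop described here — random babbling, then a model that maps motor commands to predicted outcomes — can be sketched in a few lines. Everything below is a hypothetical stand-in, not the authors’ code: a simulated planar two-joint arm instead of the WidowX, and a simple nearest-neighbor lookup instead of their deep network.

```python
import math
import random

def hand_position(angles, links=(1.0, 1.0)):
    """Planar forward kinematics: where the toy arm's hand ends up for given joint angles."""
    x = y = heading = 0.0
    for angle, length in zip(angles, links):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return (x, y)

def babble(n=1000, seed=0):
    """'Babbling' phase: try n random joint commands and record what actually happened."""
    rng = random.Random(seed)
    experience = []
    for _ in range(n):
        command = (rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
        experience.append((command, hand_position(command)))
    return experience

class SelfModel:
    """Toy self-model: memorized experience queried by nearest neighbor.
    (The paper trains a deep network; this only mirrors the shape of the idea.)"""

    def __init__(self, experience):
        self.experience = experience

    def plan_reach(self, target):
        """Pick the remembered command whose recorded outcome lies closest to the target."""
        return min(self.experience, key=lambda rec: math.dist(rec[1], target))[0]

model = SelfModel(babble())
command = model.plan_reach((1.0, 0.5))
error = math.dist(hand_position(command), (1.0, 0.5))  # small, in the toy arm's own units
```

More babbling data shrinks the reach error — loosely analogous to the real self-model converging to 4-centimeter accuracy.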

Bigger Robot Things

Then, to simulate a sudden injury or bit of damage, the researchers replaced the arm the robot had been using with one that was slightly longer and deformed. The machine quickly updated its self-image to account for the new situation, and was soon back to performing the same tasks with about the same level of accuracy.
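The recovery step follows the same recipe: when the body changes, re-collect experience and rebuild the self-model. Continuing the toy planar-arm sketch from above (hypothetical link lengths and a nearest-neighbor stand-in for the learned model — not the authors’ actual setup), the “slightly longer and deformed” replacement arm is just a different link length:

```python
import math
import random

def hand_position(angles, links):
    """Planar forward kinematics for a toy two-joint arm."""
    x = y = heading = 0.0
    for angle, length in zip(angles, links):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return (x, y)

def babble(links, n=1000, seed=0):
    """Collect (command, observed outcome) pairs for the arm as it currently is."""
    rng = random.Random(seed)
    experience = []
    for _ in range(n):
        command = (rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
        experience.append((command, hand_position(command, links)))
    return experience

def plan_reach(experience, target):
    """Nearest-neighbor stand-in for querying a learned self-model."""
    return min(experience, key=lambda rec: math.dist(rec[1], target))[0]

target = (1.0, 0.5)
healthy = (1.0, 1.0)
damaged = (1.0, 1.3)  # hypothetical: second link swapped for a longer one

# A self-model built on the healthy body now mispredicts on the damaged one...
stale = plan_reach(babble(healthy), target)
stale_error = math.dist(hand_position(stale, damaged), target)

# ...but re-babbling on the damaged body restores accuracy.
updated = plan_reach(babble(damaged, seed=1), target)
updated_error = math.dist(hand_position(updated, damaged), target)
```

The point of the sketch is only the shape of the fix: the robot doesn’t need to be told what broke, it just needs fresh observations of its own body.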

Overall, the authors make a convincing case that teaching robots to build accurate self-images may be the best way to create efficient, self-diagnosing machines. “Self-imaging will be key to allowing robots to move away from the confinements of so-called narrow AI toward more general abilities,” they write. Then they go a little bit further: “We conjecture that this separation of self and task may have also been the evolutionary origin of self-awareness in humans.”

It’s definitely cool and all, just as long as we don’t make our machines too much like the ones in The Matrix.