“I want to meet, in my lifetime, an alien species,” said Hod Lipson, a roboticist who runs the Creative Machines Lab at Columbia University. “I want to meet something that is intelligent and not human.” But instead of waiting for such beings to arrive, Lipson wants to build them himself — in the form of self-aware machines.

To that end, Lipson openly confronts a slippery concept — consciousness — that often feels verboten among his colleagues. “We used to refer to consciousness as ‘the C-word’ in robotics and AI circles, because we’re not allowed to touch that topic,” he said. “It’s too fluffy, nobody knows what it means, and we’re serious people so we’re not going to do that. But as far as I’m concerned, it’s almost one of the big unanswered questions, on par with origin of life and origin of the universe. What is sentience, creativity? What are emotions? We want to understand what it means to be human, but we also want to understand what it takes to create these things artificially. It’s time to address these questions head-on and not be shy about it.”

One of the basic building blocks of sentience or self-awareness, according to Lipson, is “self-simulation”: building up an internal representation of one’s body and how it moves in physical space, and then using that model to guide behavior. Lipson investigated artificial self-simulation as early as 2006, with a starfish-shaped robot that used evolutionary algorithms (and a few pre-loaded “hints about physics”) to teach itself how to flop forward on a tabletop. But the rise of modern artificial intelligence technology in 2012 (including convolutional neural networks and deep learning) “brought new wind into this whole research area,” he said.
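The idea of self-simulation can be made concrete with a toy sketch (this is an illustration of the general concept, not Lipson's actual system or code): a simulated two-joint arm "babbles" by moving randomly, records where its hand ends up, and fits a self-model that predicts hand position from joint angles alone. The link lengths, feature choice, and least-squares learner here are all hypothetical simplifications.

```python
# Toy "self-simulation": a 2-joint planar arm learns a forward model
# of its own body purely from (joint angles -> observed hand position)
# samples, then predicts its hand position without consulting the body.
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.7  # hypothetical link lengths

def true_hand_position(q):
    # The "real body": analytic forward kinematics, unknown to the learner.
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# "Babbling": move to random joint configurations and record the outcome.
q_train = rng.uniform(-np.pi, np.pi, size=(2000, 2))
p_train = true_hand_position(q_train)

# Self-model: least-squares fit over simple trigonometric features.
def features(q):
    return np.stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                     np.cos(q[:, 0] + q[:, 1]),
                     np.sin(q[:, 0] + q[:, 1])], axis=1)

W, *_ = np.linalg.lstsq(features(q_train), p_train, rcond=None)

# The learned self-model now predicts where the hand will go.
q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.abs(features(q_test) @ W - true_hand_position(q_test)).max()
print(f"max prediction error: {err:.6f}")
```

In this simplified setting the features happen to span the true kinematics, so the learned model is nearly exact; the deep-learning version described below faces the harder problem of discovering such a model from raw data, with no hints about the body's structure.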

In early 2019, Lipson’s lab revealed a robot arm that uses deep learning to generate its own internal self-model completely from scratch — in a process that Lipson describes as “not unlike a babbling baby observing its hands.” The robot’s self-model lets it accurately execute two different tasks — picking up and placing small balls into a cup, and writing letters with a marker — without requiring specific training for either one. Furthermore, when the researchers simulated damage to the robot’s body by adding a deformed component, the robot detected the change, updated its self-model accordingly, and was able to resume its tasks.

It’s a far cry from robots that think deep thoughts. But Lipson asserts that the difference is merely one of degree. “When you talk about self-awareness, people think the robot is going to suddenly wake up and say, ‘Hello, why am I here?’” Lipson said. “But self-awareness is not a black-and-white thing. It starts from very trivial things like, ‘Where is my hand going to move?’ It’s the same question, just on a shorter time horizon.”

Quanta spoke with Lipson about how to define self-awareness in robots, why it matters, and where it could lead. The interview has been condensed and edited for clarity.

You’re clearly interested in big questions about the nature of consciousness — but why are you investigating them through robotics? Why aren’t you a philosopher or a neuroscientist?

To me the nice thing about robotics is that it forces you to translate your understanding into an algorithm and into a mechanism. You can’t beat around the bush, you can’t use empty words, you can’t say things like “canvas of reality” that mean different things to different people, because they’re too vague to translate into a machine. Robotics forces you to be concrete.

I want to build one of these things. I don’t want to just talk about it. The philosophers, with all due respect, have not made a lot of progress on this for a thousand years. Not for lack of interest, not for lack of smart people — it’s just too hard to approach it from the top down. Neuroscientists have approached this in a more quantitative way. Still, I think they’re also hampered by the fact that they’re taking a top-down approach.

If you want to understand consciousness, why start with the most complex conscious being — that is, a human? It’s like starting uphill, the most difficult way to start. Let’s try to look at simpler systems that are potentially easier to understand. That’s what we’re trying to do: We looked at something very trivial, [a robot] that has four degrees of freedom, and asked, “Can we make this thing self-simulate?”

Are self-simulation and self-awareness the same thing?

A system that can simulate itself is to some degree self-aware. And the degree to which it can simulate itself — the fidelity of that simulation, the short-term or long-term time horizon it can simulate itself within — all these different things factor into how much it is self-aware. That’s the basic hypothesis.

So you’re reducing a term like “self-awareness” to a more technical definition about self-simulation — the ability to build a virtual model of your own body in space.

Yes, we have a different definition that we use that is very concrete. It’s mathematical; you can measure it, you can quantify it, you can compute the degree of error. Philosophers might say, “Well, that’s not how we see self-awareness.” Then the discussion usually becomes very vague. You can argue that our definition is not really self-awareness. But we have something that’s very grounded and easy to quantify, because we have a benchmark. The benchmark is the traditional, hand-coded self-model that an engineer gives to a robot. With our robot, we wanted to see if an AI algorithm can learn a self-model that’s equal to or better than what that traditional, coded-by-hand model can do.