Technically, there’s no known reason why a conscious machine couldn’t be indistinguishable from a human.

Consciousness is, in its most basic formulation, the state of being aware of one’s own existence and surroundings. Though it’s possible to be alive and unconscious (in a coma, for example), most would say that consciousness is the essence of living and being. The idea of creating consciousness from scratch, of building a robot with conscious thought, is tantalizing—both for the chance to definitively prove how consciousness works, and as a means of possibly creating sophisticated sentient beings. At their most complex, conscious robots could theoretically be lab-made humans, capable of loving, striving, and working in any role currently filled by people. That prospect is so far off that it’s not yet a serious industry goal. Instead, it’s a scientific quest.

Around the world, several scientists and philosophers have started on the path to building conscious robots. The journey is in its early stages for one key reason: We still don’t know exactly what consciousness is.

At least, there’s no consensus. While we’ve figured out the origins of the universe and what’s inside an atom, we still don’t know, for sure, how we come to be conscious beings and what role, precisely, consciousness plays in what we humans can do.

There are ideas, of course. Some argue that there’s no deep mystery to consciousness at all, that the mind is nothing more than the nuts and bolts of neurons firing in the brain. Then there are a handful of theories, such as Integrated Information Theory or Global Workspace Theory, that various groups of scientists champion, each insisting they’ve found the answer to the question of consciousness—and that the others have it utterly wrong.

Building a conscious robot puts these theories to the ultimate test. Two scientists I spoke to were convinced they had the correct definition of consciousness and, as a result, would have a prototype within the next three years. Others disparaged these claims, insisting it would take decades and that those scientists were working with the wrong theories anyway.

In Japan, Ryota Kanai is using Integrated Information Theory as his guide for building a conscious robot. Kanai, a cognitive neuroscientist who worked at the University of Sussex and University College London before founding his startup, Araya, has received $3 million in funding so far and hopes to raise $10 million more in the coming years.

He believes that to build a conscious robot, you must first program it with a model of the world so that it can recognize changes in its environment (in other words, so it can perceive). Second, you must create a program that links action with sensation, which gives the robot an internal representation of itself. Third, he wants the robot to be able to generate hypotheses about the world, which he believes will essentially act like imagination. Once the robot can internally simulate the outcomes of its possible actions without actually enacting them, Kanai believes it will be able to make conscious choices about its actions.
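
That three-part design can be illustrated with a toy sketch. To be clear, everything below is a hypothetical illustration of the general idea, not Araya’s actual code: an agent in a one-dimensional world, a self-model that predicts the sensory consequence of each action, and “imagination” that simulates candidate actions internally before committing to one.

```python
class ToyAgent:
    """A toy agent living in a one-dimensional world of integer positions."""

    ACTIONS = {"left": -1, "stay": 0, "right": +1}

    def __init__(self, goal):
        self.goal = goal  # the state the agent wants to reach

    def predict(self, position, action):
        # Self-model: the sensation the agent expects after an action,
        # i.e. the link between action and sensation.
        return position + self.ACTIONS[action]

    def imagine(self, position):
        # "Imagination": simulate each candidate action internally and
        # score how close its predicted outcome lands to the goal,
        # without enacting anything in the world.
        outcomes = {a: abs(self.goal - self.predict(position, a))
                    for a in self.ACTIONS}
        return min(outcomes, key=outcomes.get)


def run(start, goal, steps=10):
    agent = ToyAgent(goal)
    position = start
    for _ in range(steps):
        action = agent.imagine(position)            # choose by internal simulation
        position = agent.predict(position, action)  # then enact the choice
    return position


print(run(start=0, goal=3))  # → 3: the agent walks to the goal and stays there
```

The point of the sketch is only the separation of roles: perception and self-modeling live in `predict`, while `imagine` evaluates outcomes entirely internally, which is the property Kanai argues underlies conscious choice.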

This robot will have a level of consciousness closer to an ant than a human, but Kanai says this is still the most advanced conscious robot design work in the scientific community. “Most people are still working on the theoretical part,” he says.

Others are not so sure. Paul Verschure, research professor at the Catalan Institute of Advanced Research in Catalonia says he feels sorry for Kanai. “It’s sad when people waste their time,” he says.

Verschure believes Integrated Information Theory has “nothing to say about consciousness,” and has developed his own theory of consciousness that emphasizes the role of social interaction. If a robot can navigate the social world and recognize others as external agents, then, he believes, it will be conscious.

“To build a social robot that can effectively deal with other agents, it has to have all the ingredients of consciousness,” says Verschure. To be a conscious being, he believes, one must be able to interpret others’ behavior. The data we get from interacting with others is fairly limited, so we make assumptions about their thoughts and wishes. In other words, we have internal simulations of other people. “Consciousness is a specific memory system that allows you to come up with a unitary description where you say ‘me, agent, interact with other agents in the world,’” says Verschure.

He’s now designing his robot to be as social as humans, and to talk and interact with us—and expects to have a prototype within two to three years.

But, again, there are doubters. Lisa Miracchi, philosophy professor at the University of Pennsylvania, says we lack the scientific framework—namely a clear theory of consciousness—to effectively build conscious robots in the coming years. Miracchi thinks it’ll take decades, and says without obvious, short-term applications for conscious robots, there’s not much industry interest. Robotics tends to be industry-driven, she says, and so “a lot of the really smart work goes into making applications or things that are useful for humans, like self-driving cars.”

Though success is far off and the end result not obviously practical, Miracchi still believes building conscious robots is a goal worth pursuing. She’s working with Penn engineering professor Daniel Koditschek on creating robots with agency—namely the ability to have a goal and act towards this goal. Consciousness, Miracchi believes, is a necessary byproduct of agency.

“Genuine agency entails consciousness,” she says. “I think agency is a more tangible way of approaching the issue.” Currently, her philosophical work focuses on the agency involved in extremely basic actions, such as eating food (or, for a robot, consuming fuel), and on what it means to act with a goal in such cases. There’s a long way to go, but, she says, “I think we could build robots that are genuine agents.”
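
Goal-directed agency in such basic cases can be sketched in a few lines. The scenario and numbers here are invented for illustration and are not drawn from Miracchi and Koditschek’s work: a robot whose goal is to keep its fuel above a threshold, and which picks whichever action serves that goal.

```python
def choose_action(fuel, threshold=50):
    # Acting with a goal, in this toy sense, means selecting whichever
    # action moves the agent's state toward satisfying the goal:
    # here, keeping fuel above the threshold.
    return "refuel" if fuel < threshold else "work"


def step(fuel, action):
    # Refueling adds energy; working consumes it.
    return fuel + 30 if action == "refuel" else fuel - 10


def run(fuel=40, steps=5):
    history = []
    for _ in range(steps):
        action = choose_action(fuel)
        history.append(action)
        fuel = step(fuel, action)
    return fuel, history


print(run())  # → (70, ['refuel', 'work', 'work', 'work', 'refuel'])
```

Whether such a loop amounts to *genuine* agency, rather than a mere thermostat-like mechanism, is precisely the philosophical question Miracchi is working on.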

If scientists do manage to build conscious robots, it’ll definitively solve the mystery of consciousness while raising new questions about what it means to be human.

Verschure says he’s acutely aware of the “moral structure” of the machines he builds. He readily acknowledges that conscious robots have the potential to be dangerous, but also says building consciousness from scratch will help him decode the human psyche.

“The most destructive amoral machines on this planet are humans,” he says. “Building these machines will help me to understand human morality and how we can hopefully educate it.”

Even if scientists do manage the incredible feat of building a conscious robot, it will likely be far closer to Kanai’s ambition of building a robot with the consciousness of an ant than to anything resembling a human. After all, humans don’t just have basic consciousness. We are, despite and sometimes because of our flaws, incredibly sophisticated beings.

“How do you imbue these artificial systems with wants and regrets and desires?,” says cognitive psychologist Axel Cleeremans, from the Université Libre de Bruxelles, “How do you make a robot experience an orgasm? We’re back to the basics of life and death and biology.”