“I think therefore I am.”

In 1637, when he published Discourse on the Method, René Descartes unleashed a philosophical breakthrough that later became a foundational principle of modern philosophy.

Nearly 400 years later, if a machine says these five powerful words, “I think therefore I am,” does the statement still hold true?

If so, who then is this “I” that is doing the thinking?

In a recent talk, Ray Kurzweil described the difficulty of measuring machine consciousness: “We can’t just ask an entity, ‘Are you conscious?’ because we can ask entities in video games today, and they’ll say, ‘Yes, I’m conscious and I’m angry at you.’ But we don’t believe them because they don’t have the subtle cues that we associate with really having that subjective state. My prediction that computers will pass the Turing test and be indistinguishable from humans by 2029 is that they really will have those convincing cues.”

If artificial intelligence becomes indistinguishable from human intelligence, how will we determine which entities are, or are not, conscious, especially when consciousness is not quantifiable?

Though the word consciousness has many commonly held definitions, this question can be answered quite differently when filtered through existing philosophical and religious frameworks. Two particularly conflicting viewpoints are the common Eastern and Western notions of what exactly consciousness is, and how it comes to exist.

At the heart of many Eastern philosophies is the belief that consciousness is our fundamental reality; it is what brings the physical world into existence. By contrast, the Western notion of consciousness holds that it arises only at a certain level of development. Seen through these two opposing belief systems, the question “What and who is conscious?” can elicit drastically different answers.

“Fundamentally, there’s no scientific experiment that doesn’t have philosophical assumptions about the nature of consciousness,” Kurzweil says.

We’d like to have an objective scientific understanding of consciousness, but such a view remains elusive.

“Some scientists say, ‘Well, it’s just an illusion. We shouldn’t waste time with it,’” Kurzweil says. “But that’s not my view because, as I say, morality is based on consciousness.”

Why does all this matter? Because as technological evolution begins intersecting with our biological evolution as a species, the line between “human” and “non-human” entities will blur more than humanity has ever encountered, and a new era of identity, along with its surrounding ethics and philosophy, will take center stage.

What happens if a non-human conscious entity travels into another region of the world where its consciousness is not believed to be real? Or more broadly, how will we treat intelligent machines ethically as their intelligence approaches our own?

If morality is based on consciousness, does a machine become an “I” if it has one?

(Watch Kurzweil discuss the relationship between consciousness and morality below in response to a question about his own spiritual beliefs.)

“What about thinking? Here I make my discovery: thought exists; it alone cannot be separated from me. I am; I exist – this is certain. But for how long? For as long as I am thinking; for perhaps it could also come to pass that if I were to cease all thinking I would then utterly cease to exist. At this time I admit nothing that is not necessarily true. I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason – words of whose meanings I was previously ignorant. Yet I am a true thing and am truly existing; but what kind of thing? I have said it already: a thinking thing.” – René Descartes

We’d love to hear from you: Tweet to us @singularityhub or to me @DigitAlison with your ideas and comments.

Image credit: Shutterstock