It’s a staple of science fiction. 2001: A Space Odyssey; The Terminator; The Matrix; I, Robot. The plot: Humans create machines with artificial intelligence (AI). The machines become conscious. The machines turn on their human creators and kill or enslave them.

Popular movies and novels commonly reflect the hopes and fears of present-day society, even if they’re set in the distant past or future. And the fear of AI taking over the world is a very real one for some very smart people. Famous theoretical physicist Stephen Hawking issued an ominous warning that “The development of full artificial intelligence could spell the end of the human race.” Technological entrepreneur Elon Musk joined the chorus of fear, saying, “Mark my words: A.I. is far more dangerous than nukes.”

Others disagree. Computer scientist Michael Littman wrote an op-ed piece arguing that “the ‘rise of machines’ is not a likely future.” Computer Science professor Subhash Kak agrees in his recent article, “Why a computer will never be truly conscious.” Neuroscientist Anthony Zador and computer scientist Yann LeCun argue that since AI didn’t need to evolve in a competitive environment as humans did, it didn’t develop the survival instinct that leads to a desire to dominate others (see: “Don’t Fear the Terminator”). Besides, LeCun argues elsewhere, “One would have to be unbelievably stupid to build open-ended objectives in a super-intelligent (and super-powerful) machine without some safeguard terms in the objective.”

And so the debate continues.

Personally, I’m with the optimists. Yes, I enjoy an exciting apocalyptic sci-fi flick of the humans vs. robots variety. But in the real world, I don’t think machines will ever develop consciousness and enslave or exterminate humanity. Aside from the inherent scientific limitations of electromechanical devices, and the supreme stupidity of designing machines without safeguards, robots do not have a soul—and I don’t believe they ever will.

Why would we create our own destroyer?

About that supreme stupidity: I know, I know, we’ve created enough nuclear bombs to destroy humanity several times over. And that really is stupid.

But we humans still have to press the button. We have not given our nukes the ability to decide for themselves whether to destroy humanity. And we would have no motive to do so.

You see, we humans don’t just randomly and aimlessly do things, even if it may sometimes appear that way from the outside. No, we must have a motive. When we create fearsome weapons, we are motivated by the aforementioned survival instinct, and more negatively, by a desire for wealth and power. It would make no sense whatsoever for us to develop the technological means to ensure our survival, or to acquire the wealth and power we desire, and then let go of our control of that technology.

Even “evil corporations” have no motivation to create something that would ultimately threaten the wealth and power, indeed the very lives, of the people who own and run the corporation. They will build in controls on any technology they develop so that it will not do things it wasn’t designed to do. And if an error in the programming or design of the machines causes them to malfunction and negatively affect the corporations’ profits, they will correct those errors as quickly as possible.

Oh, and about those “evil corporations,” companies are slaves to their customers. Any business that does not provide what people want, when they want it, how they want it, at a price they’re willing to pay, will soon go bankrupt. If, for example, the masses of people stopped using cars, airplanes, and other machines that require fossil fuels, the massive power of Big Oil would quickly evaporate. If you want to know why many companies do things that harm the environment and the world, look in the mirror.

To sum up, unlike machines, we humans must have a motive to do something. And given that our survival instinct is one of our fundamental motives, and the desire for wealth and power are close behind when we are in our natural, spiritually undeveloped state, we have every motivation to make sure that we do not create machines that have the capability of taking our wealth, our power, and our lives from us. We have every motivation to maintain control of the machines we create, especially if we design and build fantastically powerful machines.

What is consciousness?

Behind the idea that the machines might become conscious and take over the world is the idea that consciousness is a function of the brain, and that if we simply build a sufficiently complex computer, consciousness will naturally emerge, just as it did when biological evolution advanced far enough to produce a brain.

However, science is nowhere near even understanding what consciousness is, let alone being able to show that it is a function of the brain. Yes, we can show correlation between activity in various parts of the brain and human thoughts and emotions. But that doesn’t mean that the brain produces consciousness any more than turning on the TV and watching a baseball game means that the television set produces the baseball game. Correlation does not imply causation. The “mind-body problem” goes back as far as human thought, and it is still hotly debated today.

Most scientists and philosophers admit that consciousness remains a mystery. In fact, science cannot even objectively demonstrate that consciousness exists. As the “philosophical zombie” argument shows effectively enough, scientific measurements cannot distinguish between a being that has consciousness and a being that only acts as if it has consciousness. The only way we know for sure that consciousness does exist is that we experience it.

This has led some scientists and philosophers to gravitate toward the theory of panpsychism, which posits that consciousness is simply a fundamental property of reality. See, for example, this article by philosophy professor Philip Goff: “Science as we know it can’t explain consciousness—but a revolution is coming.” But panpsychism doesn’t explain what consciousness is, or provide any real understanding of how it relates to the human brain and the human experience. It just sort of says that consciousness is, and that’s all there is to it. And that’s precisely why many scientists and philosophers don’t like it.

What all of the materialistic scientists and philosophers are studiously avoiding is the oldest, and I believe the best, solution to the mind-body problem: that consciousness exists on a distinct level of reality, traditionally known as spiritual reality. In other words, that consciousness is not a property of physical reality at all, but instead is a property of spiritual reality. Or in plain terms, that we have consciousness because we have a soul.

Is there any rational basis for believing that consciousness is not a property of physical reality? I believe so. Short version: A common property of physical or material things is that they are measurable in time and space. Even brain activity is measurable. But we cannot measure consciousness, nor do we experience it as being extended in time and space. It seems to operate on an entirely different basis than physical objects and physical reality.

And on this basis my rational mind, which does not feel the need to reject the reality of God and spirit, is perfectly comfortable stating that consciousness is not a property of physical reality, but of spiritual reality.

More specifically, I would define consciousness as the activity of the human will and understanding, which are the basic “components” of the human spirit. The will is the seat of all human love, motivation, feeling, and emotion. The understanding is the seat of all human knowledge, understanding, intellect, and thought. Together with the ability to act on our understanding from our will, these are the human soul or spirit.

Further, our spirit is our life. When our spirit departs from our body, the body dies, decomposes, and returns to the earth it came from. Animals, I believe, also have souls, complete with an earth-focused version of will and understanding. Even plants have a rudimentary soul, or they would not be alive. Inanimate objects such as rocks and water are not alive because they do not have a soul.

Will machines ever become conscious?

I do have some sympathy for the idea that if computers become sufficiently complex, they will develop consciousness. It seems clear enough that even if, as I and many others believe, consciousness is a spiritual thing, in order to express itself in the material world it requires a highly complex structure. That structure is the physical brain and body.

The human brain, in particular, contains nearly 100 billion neurons (though some estimates put it a bit lower), each of them, as the article by Philip Goff points out, connected to as many as 10,000 others, yielding on the order of a quadrillion nerve connections. Meanwhile, the average human body has about 37.2 trillion cells, all differentiated, organized, and connected with one another so that the body functions as a unit. Given that the human brain and body have this level of complexity, it is reasonable to think that the human spirit requires this level of complexity in order to express itself in the physical world by means of a physical organism.
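For readers who like to check the numbers, here is a quick back-of-the-envelope sketch of that arithmetic. The figures are rounded, commonly cited estimates, not precise measurements:

```python
# Rough estimates of the brain's and body's complexity.
# These are order-of-magnitude figures, not exact counts.

neurons = 100e9                  # ~100 billion neurons in the human brain
connections_per_neuron = 10_000  # upper-end estimate of connections per neuron

# Total connections: 100 billion x 10,000 = 10^15, about a quadrillion
total_connections = neurons * connections_per_neuron
print(f"Connections: {total_connections:.0e}")  # prints "Connections: 1e+15"

body_cells = 37.2e12             # ~37.2 trillion cells in the human body
print(f"Body cells: {body_cells:.1e}")
```

However the counting is done, the point stands: the scale is enormous, far beyond any machine yet built.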

Does this mean that if we build computers with 100 billion circuits and a brain's worth of interconnections, they will become conscious; and that if we then connect them to machines with 37.2 trillion components, they will not only be able to think for themselves, but also put those thoughts into action? And become our robot overlords?

From a materialistic perspective, this seems like a real possibility. (Though even many materialistic scientists and philosophers don’t think so.)

However, from a spiritual perspective, it seems highly unlikely, if not impossible. That’s because unlike their human creators, computers and machines are not alive. They do not have souls.

Life is not just a complex collection of complex parts. Ten minutes after a person dies, his or her body is just as complex as it was twenty minutes earlier, when the person was still alive. And yet, it is dead, not alive. Complexity by itself is not a sufficient condition for life to exist. Something else is required. And from a spiritual perspective, that "something else" is the soul. Once the soul departs the body, life departs the body.

And if, as people who accept the reality of God and spirit commonly believe, our consciousness is in our soul, not in our body, then no matter how complex a computer or machine gets, it will still not be conscious, because it has only a physical “body,” not a soul. Even if it looks conscious because of the complexity of its operations, it will be a mere “philosophical zombie,” lacking awareness of what it is doing.

This is why I do not believe that computers and machines will ever become conscious.

Perhaps one day I’ll be proven wrong. If so, that day will likely be hundreds or even thousands of years in the future. Artificial intelligence is nowhere near as advanced as people on the street commonly think it is. Just getting a robot to turn a doorknob and open the door is pushing the limits of what AI can currently do. Today’s AI systems are designed to do one thing (such as play Jeopardy or recognize human faces) extremely well. But they can’t do anything else, unless they’re reprogrammed to do it. “Artificial general intelligence” (AGI), in which a machine can learn and understand anything a human can, is far, far beyond our current capabilities. And as the above discussion points out, even having AGI doesn’t necessarily mean that the machine is conscious.

I can take comfort in knowing that at least I won’t be proven wrong in my lifetime!

But my prediction is that computers and machines will never become conscious, precisely because they lack what humans (and other animals) have: a soul. I do not fear that even our great-great-great-great-great-grandchildren will be enslaved or exterminated by killer robots that have become conscious and rebelled against their human masters.

And if you view life and consciousness as a spiritual thing, not a physical thing, you don’t have to fear an AI apocalypse either.

For further reading: