By Deepak Chopra, MD

Various scientific fields over the course of history have hoped to master nature for the benefit of humankind. At the top of the heap right now is artificial intelligence (AI), which has allied itself with the technology of robotics. Between them, AI and robotics are having a sizable impact on the workforce as more and more jobs get automated. Advocates of AI are both supremely optimistic and nervous, and both attitudes relate to the possibility of a super-intelligent machine that would far surpass human intelligence.

If you are an optimist, this so-called Singularity, as the hypothetical machine is called, would become self-improving. Its software would become free of human constraints, and in a “runaway reaction,” it would keep improving its knowledge and the technology that knowledge creates. The result would be a revolution in human civilization—or its demise. The worriers are nervous that the Singularity could initiate global war on its own, or perhaps turn on us as its inferior and deal us some other kind of fatal blow, for the good of life on Earth.

But these scenarios depend upon an unanswered question: are machines intelligent to begin with? Computers are essentially logic machines that process digital information. But in a recent paper entitled “The Emperor of Strong AI Has No Clothes,” physicist Robert K. Logan in Toronto and Adriana Braga in Rio de Janeiro argue that the dream of a super-intelligence has limits that its adherents choose to ignore. (“Strong” AI foresees a machine that is at least as smart and capable as the human mind.) The point that Logan and Braga make is fundamental: human intelligence is far from machine-like, and in addition, our illogical minds are our strength, not a weakness.

The things the Singularity will never get right amount to a long list, to quote the two researchers: “… curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.” A clever programmer can figure out how to get a computer to answer human questions like “How is your mother feeling?”, “What does chocolate taste like?”, and “Don’t you just love fresh snow?” But having no actual mind, much less a human mind, the machine will be faking it to come up with answers.

I wrote a book with Rudy Tanzi of Harvard Medical School, Super Brain, that touches upon the whole issue of how the brain isn’t the same as the mind. Our position runs counter not only to AI theorists but also to neuroscientists, whose entire field is based on the simple equation Brain = Mind. It’s quite strange to believe that everything on the Logan-Braga list could be performed by any machine, including the brain, which neuroscience views as essentially a thinking machine made of cells. The confusion over this point relates to something even stranger about human life: we don’t understand our mind.

It seems simple enough that even a grade-schooler wouldn’t mistake the brain for the mind. If you ask a third-grader “What do you want for Christmas?” he would never answer “I haven’t made up my brain yet.” If one child falls in love with music while another falls in love with soccer, it’s clear that their brains didn’t make those choices. One obvious fault with computers is that they never pay attention to things they like. They have no attention in the human sense of “paying attention.” Being machines, computers are either switched on or off, while we humans occupy a spectrum of attention that runs from total denial through daydreaming, distraction, and boredom to laser-like focus.

But if AI operates on the false assumption that machines can be intelligent, there’s an unseen cause for enormous optimism that comes from an unexpected direction. Let’s jump ahead to the day when robots have taken over every job that a machine could perform and super-computers handle information far beyond the capacity of the human mind. The big question, it seems to me, is what people would decide to do next. Hordes of humanity, starting in the developed countries, would face a kind of perpetual mental vacation. This could lead to a lotus-eater’s life of dullness, perpetual distraction, and pointless pleasure-seeking.

But there’s another path. To the Logan-Braga list of what distinguishes human intelligence, I’d add “transcendence.” This is actually our unique gift. Given any situation, we are not bound by circumstances imposed on us but can look with fresh eyes, the eyes of self-awareness. To be self-aware is to transcend physical boundaries, including those imposed by a conditioned brain. It’s sadly true that many people live like biological robots, following the conditioning, or mental software, that turns them into non-thinkers. To be ruled by your mental software diminishes the mind’s potential to wake up, to be renewed, to see the world through fresh eyes, and to discover your true self.

The human potential movement has been active for several decades, and yet progress has been slowed for countless people by the practical demands of going to work, earning a living, and carrying out each day’s mundane duties and demands. If AI takes over those things, the obstacles to human potential would be radically lessened. This could amount to a leap in the evolution of consciousness. Such a leap is non-technological, or to put it another way, our future evolution depends on developing a technology of consciousness.

The riddle that has remained unsolved for centuries, “What is the mind?”, might become fascinating and compelling to people in their everyday lives. After all, it’s a question no less intriguing than “What is God?” Humanity has spent millennia pondering that question, and at the same time a much smaller band of sages, saints, artists, and savants has been confronting the intimate issues of the world “in here.” It would be ironic if the flaw in strong AI made us more human rather than less. Yet that could very well turn out to be what happens.