On AI, Buddha-Nature, and the Hard Problem of Consciousness

by Adam Braus, Founder of Coride.com

In Ray Kurzweil’s book “How to Create a Mind,” he spends a great deal of time reviewing the standards for an Artificial Intelligence and exploring at what point (if ever) an AI will equal (or possibly exceed) human intelligence. One standard he explores comes from a term coined by the philosopher David Chalmers: “the hard problem of consciousness.” This phrase stands for the philosophical question “What is the nature of phenomenal experience?” or “How do we have subjective experience, and what is it like?” Kurzweil discusses the “hard problem of consciousness” as a test for true AI: could a supercomputer have subjective experience? Would that make the machine equal to a human in being in the world and thinking? After meditating on this problem for a few months, an interesting conclusion unfolded for me that I would like to share.

When we reach insoluble or “hard” problems at the extremities of a science or theory, it is good practice to reassess the first principles of that system. At the axiomatic base of scientific theories of consciousness we find Kant’s theory of Transcendental Apperception. For Kant, consciousness was king. He considered consciousness to be a higher faculty by which disparate elements of lower experience are unified into a single grand-poo-bah conscious experience; he called this unifying faculty Transcendental Apperception. However, psychology and modern neuroscience show that, rather than consciousness sitting above other forms of experience, consciousness is a veneer that hides a complex competition among independent processing centers of neurological activity. The left brain sees and computes the world one way, the right brain another, the hippocampus and the amygdala a third, the cerebellum a fourth, and even nerves in the gut and chest a fifth and sixth. These various centers all compete for supremacy, while consciousness takes note of only a strangely blended subset of experience. So why put some of these processing centers above others? Why not treat them all equally?

Let’s do a thought experiment together.

Sit for a moment quietly as you read this. Become aware of what you are seeing and perceive everything coming into your entire visual field. Simultaneously open your ears and notice any and all sounds. Taste the taste in your mouth, and smell the air as you breathe normally. Finally, feel all over your body the various textures and temperatures. These are your senses; call this part 1. Now, for part 2, read the rest of this paragraph through and then close your eyes. Ignore your other senses and pay attention only to feelings, thoughts, and urges. Do you want to stop doing this little experiment? That is an urge. What’s next? Wondering about how AI works? That is a thought. Hurting from an argument you got into with a friend? That is a feeling. Close your eyes and pay attention only to your feelings, thoughts, and urges.

What is the difference between part 1 and part 2, between the 5 external senses and the 3 “internal senses” of feelings, thoughts, and urges? Are they actually the same? Why or why not?

Someone might say, “They are not the same because I’m not in control of my external senses, but I am in control of what I decide to think about, what I feel, and what choices I make.” Are you really in control of your thoughts, feelings, and decisions? My experience of my feelings, thoughts, and decisions rarely feels like a choice between options; instead it seems like an unpredictable one-thing-after-another-ness. Even if we do “choose to feel or think something,” the plan to do so has to be an urge that arises conditioned by something we thought or felt before. For example, I might think, “If I eat healthy I’ll have a better mood, so I’ll plan to eat healthy,” and thus the urge arises. Another reason it is difficult to consider the external and internal senses equally is that there are no special external organs, like the nose or tongue or eye or skin, for these three internal senses; however, there are specialized internal organs in the brain that make these three senses possible: the hippocampus, the amygdala, and the neocortex, among others. Finally, what is likely the most difficult thing about considering the external and internal senses as the same is that our social and personal identities are tied up in and dependent on our feelings, thoughts, and choices, but not on our senses. We identify with how we feel, what choices we’ve made, and what we think, but not with what we see or taste. It is not very comfortable to consider our feelings to be categorically the same as looking down a street, smelling smoke, or tasting a hamburger.

I suggest that we overcome these obstacles and entertain the idea that there is no difference between our 5 external senses and our 3 internal senses. Feelings, thoughts, and decision making are just three more senses on par with seeing, hearing, tasting, touching, and smelling. Each of these internal senses has organs that make it work. Moreover, just as we have to maneuver with the 5 external senses to stay upright when we sit or stand, distinguish palatable from spoiled food, not stare at the sun, and so on, we likewise must cope with our feelings, learn and think about concepts and stories, and make good decisions.

So, so what? What does this have to do with AI? What about the hard problem of consciousness? If feelings, thoughts, and urges are categorically the same as the other senses, then there is no “hard problem of consciousness.” There is still a difficult and complex problem of building a machine that has organs similar to the hippocampus and neocortex, but there is no “hard” or insoluble problem of consciousness. For example, 100 years ago when the camera was being developed, was there a “hard problem of sight” that meant we could never make a camera render a representation of the world? No. We made lenses and light-reactive chemical films that mimic the eye. When we make prosthetic limbs, is there a “hard problem of touch”? No. We create motorized hands with delicate electronic sensors that mimic the muscles and neurological sensors of the hand. Is it a “hard problem of sight” when Facebook detects you and your friends’ faces in a picture? Or when a Google self-driving car sees and negotiates its surroundings? Or when Shazam or Siri listens to songs or our voices? No. Of course a computer’s detection of a face or listening to music is different from ours, but so is the detection of a face by a praying mantis (which can detect faces and keep eye contact . . . creepy!) or the way whales hear each other’s songs. The hard part of the “hard problem of consciousness” only arises when we consider feelings, thoughts, and urges as somehow categorically different from, or higher than, the senses.

So what happens when an AI begins to decide things, to make changes to itself, or to consider itself? What happens when an AI starts to “like” things on Facebook? Makes discoveries independent of human planning? Such milestones would be extraordinary, but not magical; in a way, they would be quite ordinary. If our internal senses and external ones are categorically the same, AI and its inventors are not overcoming the “hard problem of consciousness”; they are overcoming the mind-bogglingly complex problem of modeling the internal senses.

When I consider consciousness as a set of internal senses instead of a heroic, magical faculty, I feel humbled about humanity, and can see clearly the very subtle difference between a humanist and a Buddhist. Humanity is not the master of the garden. Instead we are a bright and lively element enmeshed deeply in the cosmic quilt of possibilities and conditions. Will a super intelligent being see its own being in the same way? I sure hope so.