The final brief in our series looks at the most profound scientific mystery of all: the one that defines what it means to be human

“I THINK, therefore I am.” René Descartes’ aphorism has become a cliché. But it cuts to the core of perhaps the greatest question posed to science: what is consciousness? The other phenomena described in this series of briefs—time and space, matter and energy, even life itself—look tractable. They can be measured and objectified, and thus theorised about. Consciousness, by contrast, is subjective. As Descartes’ observation suggests, a conscious being knows he is conscious. But he cannot know that any other being is. Other apparently conscious individuals might be zombies programmed to behave as if they were conscious, without actually being so.

In reality, it is unlikely that even those who advance this proposition truly believe it, as far as their fellow humans are concerned. Cross the species barrier, however, and matters become muddier. Are chimpanzees conscious? Dogs? Codfish? Bees? It is hard to know how to ask them the question in a meaningful way.

Moreover, consciousness is not merely a property of having a complex, active brain, for it can vanish temporarily, even while the brain is healthy and functional. Most people spend a third of their lives in the state described as “sleep”. Unless awoken while dreaming, they have no sense of being conscious during these periods. Recordings of the brain’s electrical activity show, though, that a sleeping brain is often as busy as one that is awake. Subjective though it is, consciousness therefore looks like a specific phenomenon, not a mere side-effect. That suggests it has evolved, and has a biological purpose. These things—specificity and purpose—give researchers something to hang on to.

A lot of brain science relies on looking at brains that are broken. Studying consciousness is no exception. One of the most intriguing examples has emerged from work, started in the early 1970s by Lawrence Weiskrantz of Oxford University, on a phenomenon called blindsight.

Blindsight is occasionally found in those whose blindness is caused by damage to the visual cortex of the brain, perhaps by a stroke or tumour, rather than by damage to the eyes or optic nerves. Those who have blindsight have no conscious awareness of being able to see. They are nevertheless able to point to, and even grasp, objects in their visual fields.

A dip in the stream of consciousness

Blindsight is an example of how brain damage can abolish the conscious experience of a phenomenon (in this case vision) without abolishing the phenomenon itself. Conversely, apparently full consciousness can be retained in the absence of quite important parts of the brain. One example of this is the case of a Chinese woman born without a cerebellum. This is a structure at the back of the brain which co-ordinates movement. The woman in question thus finds it awkward to move around. But she is completely conscious and is able to describe her experiences. Unlike the visual cortex, then, the cerebellum has no apparent role in generating consciousness.

Observations like this have led to a search for the neural correlates of consciousness—the bits of the brain responsible for generating conscious experience. One of particular interest is the claustrum. This is a candidate because of its extensive connections with other parts of the brain. A crucial property of consciousness is that it integrates many sorts of experience, both sensory and internally generated. Discovering how this integration happens is known as the binding problem. In 2005 a paper published by Francis Crick (posthumously, for he had died the previous year) and Christof Koch (who now works at the Allen Institute for Brain Science, in Seattle) looked at the binding problem. The two researchers lit upon the claustrum as something that might help illuminate it.

The claustra (there are two, one in each cerebral hemisphere—see diagram) are thin sheets of nerve cells tucked below the cerebral cortex that have connections both to and from almost every area of the cortex. They are the only structures that link the various parts of the cortex in this way. Crick and Dr Koch suggested they act like orchestral conductors, co-ordinating the activities of the cortical components and thus solving the binding problem. Doing experiments to test this idea is hard, for the procedures needed (such as the implantation of electrodes) would be intrusive, risky and thus unethical for the mere satisfaction of curiosity. But one such experiment has happened by accident. In 2014 Mohamad Koubeissi, an American neurologist, was trying to hunt down the origin of the epilepsy suffered by one of his patients. To do so he implanted electrodes into her brain—permissible in view of her condition’s seriousness. When he placed one near one of her claustra and switched the current on, she lost consciousness. When he switched the current off, she regained it. When he repeated the procedure several times, he got the same result on each occasion.

Another phenomenon correlated with consciousness, which some think may help solve the binding problem, is a pattern of electrical impulses, known as gamma waves, which beat at an average frequency of 40Hz, in synchrony in different parts of a person’s brain. They are strongest during conscious concentration on tasks, are always present when someone is conscious, and largely disappear when he is asleep, unless he is dreaming. Many neuroscientists suspect gamma waves’ synchrony means they are acting like the clock in a computer processor, co-ordinating the activities of disparate parts of the brain—in other words, binding them together.
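The synchrony researchers look for can be quantified. A common measure is the phase-locking value: two regions oscillating at the gamma frequency with a fixed phase relationship score near 1, while unrelated oscillations score near 0. The sketch below is purely illustrative—it uses synthetic signals, not real brain recordings, and the signal names and noise levels are invented for the example:

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative only: two synthetic "brain region" signals oscillating in
# the gamma band (~40Hz). The phase-locking value (PLV) measures how
# tightly their phases track one another -- the kind of synchrony the
# clock analogy in the text describes.

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal
f_gamma = 40.0                 # gamma-band frequency, Hz

def plv(x, y):
    """Phase-locking value: 1 = perfectly locked phases, ~0 = unrelated."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

shared = 2 * np.pi * f_gamma * t
region_a = np.sin(shared) + 0.2 * rng.standard_normal(t.size)
region_b = np.sin(shared + 0.5) + 0.2 * rng.standard_normal(t.size)  # fixed lag
drifting = np.sin(2 * np.pi * 43.0 * t)  # nearby frequency, never locked

print(f"locked pair:   PLV = {plv(region_a, region_b):.2f}")  # close to 1
print(f"unlocked pair: PLV = {plv(region_a, drifting):.2f}")  # close to 0
```

Note that a constant phase lag still counts as locked: what matters for the clock analogy is that the relationship between regions is stable, not that they peak at the same instant.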

Yet another neural correlate of consciousness is the temporoparietal junction. Damage to this part of the brain, or use of a technique called transcranial magnetic stimulation (TMS) to deactivate it temporarily, creates intriguing effects. In particular, it can cause out-of-body experiences in which a person’s conscious perception of himself appears (from his point of view) to detach itself from his body.

TMS of the temporoparietal junction also reduces someone’s ability to empathise with the mental states of others. That suggests this part of the brain helps generate “theory of mind”—the ability to recognise that other creatures, too, have minds. Some see this link as more than coincidence. Seeking an evolutionary explanation for consciousness, they suggest that an animal which can model another’s behaviour can gain an advantage by anticipating it. They further suggest that, since the only model available to a mind that wishes to understand another’s is itself, a theory of mind necessarily requires self-awareness. In other words, consciousness.

This bears on the question of how it might be possible to find out if non-human animals are conscious. If being conscious requires the self-awareness that having a theory of mind implies, then those with it might be expected to be able to recognise themselves in a mirror.

Human babies are able to do so from the age of 18 months. That was well-known in 1970, when Gordon Gallup of the State University of New York, Albany, tried the experiment on three other primate species. Previous research had suggested that most animals, when they see themselves in a mirror, respond as to a stranger—often aggressively—and seem unable to learn, no matter how long the mirror is there, to do otherwise. Dr Gallup found that this was indeed true for two species of macaque monkey. But chimpanzees soon learned that the image in the mirror was a reflection of themselves, and even used it as a person might, to assist grooming.

To hold, as ’twere, the mirror up to nature

Subsequent mirror studies have looked at bonobos, gorillas, orang-utans, gibbons, many other monkeys, elephants, dogs, dolphins and various birds. Bonobos, orang-utans, elephants, dolphins and magpies react in ways that might be interpreted as self-recognition. Gorillas, gibbons, monkeys, dogs and pigeons do not. Although some psychologists question the value of the mirror test (dogs, for example, rely heavily on smell rather than vision for individual identification, so may simply be uninterested in images of themselves), it does suggest the capacity for self-recognition has emerged independently in animals with differently organised brains. If the phenomenon’s neural correlates could be identified in those brains—admittedly a hard task—then comparative studies might be possible. That would be valuable, as it is difficult to do good science when only one example is available.

What sits behind the looking glass?

And yet. Finding the neural correlates of consciousness, or even understanding what it is for and how it evolved, does not truly address the question of what it actually is—of what it is people are experiencing while they are conscious. This question has come to be known as the “hard problem” of consciousness.

It was so dubbed in 1995 by David Chalmers, an Australian philosopher, and the name encapsulates both the fact that it is hard to resolve and that its resolution is the heart of the matter. Merely calling it hard does not really help the investigator to think about it, but the work of another philosopher, Thomas Nagel, perhaps does. In 1974 Dr Nagel, an American, posed the problem in a novel way, in a paper called “What is it like to be a bat?”

For the sake of this thought experiment Dr Nagel assumed bats have conscious experience of the world. If they do, he suggested, that experience will be built largely on the basis of a sense—echolocation—which human beings do not possess. A human might, Dr Nagel posits, plausibly imagine some parts of a bat’s experience, such as hanging upside down for long periods, or even flying. But seeing the world through sonar is ineffable to humanity.

The nub of the hard problem, then, is to make this ineffability effable. Other fields of scientific endeavour circumvent ineffability with mathematics. No one can truly conceive of a light-year or a nanosecond, let alone extra dimensions or wave-particle duality, but maths makes these ideas tractable. No such short-cut invented so far can take a human inside the mind of a bat. Indeed, for all the sophistication of theory of mind it is difficult, as everyday experience shows, to take a human being inside the mind of another human being. The hard problem may thus turn out to be the impossible problem, the one that science can never solve. The Oracle at Delphi said, “Know thyself.” Difficult. But a piece of cake compared with knowing others.

A video to accompany this brief is available at economist.com/sb2015