Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
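To make the "election" concrete, here is a minimal winner-take-all sketch, in which each unit suppresses its rivals in proportion to their summed activity. The function name, the inhibition strength, and the update rule are illustrative assumptions for the sketch, not a model of any real circuit:

```python
# Toy winner-take-all: each unit's activity inhibits the others,
# so only the strongest signal survives the competition.
def selective_enhancement(signals, inhibition=0.5, steps=20):
    activity = list(signals)
    for _ in range(steps):
        total = sum(activity)
        # Each unit is suppressed by the summed activity of its rivals.
        activity = [
            max(0.0, a - inhibition * (total - a) / max(len(activity) - 1, 1))
            for a in activity
        ]
    return activity

# Four competing sensory signals; only the strongest rises above the noise.
result = selective_enhancement([0.2, 0.9, 0.3, 0.1])
```

After repeated rounds of mutual suppression, the weaker signals are driven to zero and the strongest one alone impacts the output, which is the essence of selective signal enhancement.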

We can take a good guess at when selective signal enhancement first evolved by comparing different species of animal, a common method in evolutionary biology. The hydra, a small relative of jellyfish, arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others. The split between the ancestors of hydras and other animals, according to genetic analysis, may have been as early as 700 million years ago. Selective signal enhancement probably evolved after that.

The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals.
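The outline-sketch idea can be illustrated with a toy lateral-inhibition filter over a one-dimensional strip of light intensities; the inhibition weight of 0.5 and the treatment of the endpoints are arbitrary choices made for this sketch:

```python
# Toy edge enhancement, loosely inspired by lateral inhibition in the
# arthropod eye: each point's output is its input minus a fraction of
# its neighbors', so uniform regions cancel out and only edges survive.
def edge_sketch(intensities, inhibition=0.5):
    out = []
    for i, x in enumerate(intensities):
        left = intensities[i - 1] if i > 0 else x
        right = intensities[i + 1] if i < len(intensities) - 1 else x
        out.append(x - inhibition * (left + right))
    return out

# A step from dark (0) to bright (1): the flat regions are suppressed
# and the response is concentrated at the edge.
profile = [0, 0, 0, 1, 1, 1]
sketch = edge_sketch(profile)
```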

The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.

All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.

The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
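That predict-then-compare loop can be sketched with a toy one-dimensional "retina," under the simplifying assumption that moving the eye right by n pixels shifts the scene left by n. Everything here, including the function names and the tolerance parameter, is illustrative:

```python
# Toy forward model: predict how the retinal image shifts when the
# eye moves, then compare the prediction against the actual input.
def predict_retinal_shift(image, eye_movement):
    # Moving the eye right by n pixels shifts the scene left by n.
    n = eye_movement % len(image)
    return image[n:] + image[:n]

def movement_as_planned(image_before, image_after, eye_movement, tolerance=0):
    predicted = predict_retinal_shift(image_before, eye_movement)
    mismatches = sum(p != a for p, a in zip(predicted, image_after))
    return mismatches <= tolerance

scene = [0, 0, 1, 0, 0]   # a single bright spot
after = [0, 1, 0, 0, 0]   # what the retina reports after the movement
ok = movement_as_planned(scene, after, eye_movement=1)  # True: shift matched
```

If the actual input fails to match the prediction, the controller knows the movement did not go as planned and can correct it.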

With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously. It’s by far the largest structure in the human brain. Sometimes you hear people refer to the reptilian brain as the brute, automatic part that’s left over when you strip away the cortex, but this is not correct. The cortex has its origin in the reptilian wulst, and reptiles are probably smarter than we give them credit for.

The cortex is like an upgraded tectum. We still have a tectum buried under the cortex and it performs the same functions as in fish and amphibians. If you hear a sudden sound or see a movement in the corner of your eye, your tectum directs your gaze toward it quickly and accurately. The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.

The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it. Scientists sometimes compare covert attention to a spotlight. (The analogy was first suggested by Francis Crick, the geneticist.) Your cortex can shift covert attention from the text in front of you to a nearby person, to the sounds in your backyard, to a thought or a memory. Covert attention is the virtual movement of deep processing from one item to another.

The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are.
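Purely as an illustration of what "a constantly updated set of information" might mean (the AST itself specifies no data structure, so every name here is hypothetical):

```python
# Toy attention schema: a coarse model of what covert attention is doing.
# Deliberately vague: it records WHAT is attended and its recent shifts,
# but carries no information about neurons, synapses, or signals.
class AttentionSchema:
    def __init__(self):
        self.focus = None    # the item currently attended
        self.history = []    # recent shifts, useful for prediction

    def update(self, new_focus):
        self.history.append(self.focus)
        self.focus = new_focus

    def describe(self):
        # The schema's self-report is physically incoherent by design:
        # it depicts attention as an intangible "mental possession."
        if self.focus is None:
            return "Nothing is being grasped right now."
        return f"Something intangible in me has taken hold of {self.focus}."

schema = AttentionSchema()
schema.update("the text in front of you")
schema.update("a sound in the backyard")
report = schema.describe()
```

The point of the sketch is what the model leaves out: the schema tracks the movement of attention and its consequences while omitting the mechanism, which is exactly the vagueness the theory turns on.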

Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”

The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas, crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.

When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.

When psychologists study social cognition, they often focus on something called theory of mind, the ability to understand the possible contents of someone else’s mind. Some of the more complex examples are limited to humans and apes. But experiments show that a dog can look at another dog and figure out, “Is he aware of me?” Crows also show an impressive theory of mind. If they hide food when another bird is watching, they’ll wait for the other bird’s absence and then hide the same piece of food again, as if able to compute that the other bird is aware of one hiding place but unaware of the other. If a basic ability to attribute awareness to others is present in mammals and in birds, then it may have an origin in their common ancestor, the reptiles. In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.

If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us. Data from my own lab suggests that the cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.

Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes. We could say out loud, “I’m conscious of things. So is she. So is he. So is that damn river that just tried to wipe out my village.”

Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD. One speculation is that it’s better to be safe than sorry. If the wind rustles the grass and you misinterpret it as a lion, no harm done. But if you fail to detect an actual lion, you’re taken out of the gene pool. To me, however, the HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.

And so the evolutionary story brings us up to date, to human consciousness—something we ascribe to ourselves, to others, and to a rich spirit world of ghosts and gods in the empty spaces around us. The AST covers a lot of ground, from simple nervous systems to simulations of self and others. It provides a general framework for understanding consciousness, its many adaptive uses, and its gradual and continuing evolution.