Robots like Allen, and Elmer and Elsie before it, seemed to Clark to represent a fundamentally different idea of the mind. Watching them fumble about, pursuing their simple missions, he recognized that cognition was not the dictates of a high-level central planner perched in a skull cockpit, directing the activities of the body below. Central planning was too cumbersome, too slow to respond to the body’s emergencies. Cognition was a network of partly independent tricks and strategies that had evolved one by one to address various bodily needs. Movement, even in A.I., was not just a lower, practical function that could be grafted, at a later stage, onto abstract reason. The line between action and thought was more blurry than it seemed. A creature didn’t think in order to move: it just moved, and by moving it discovered the world that then formed the content of its thoughts.

The world is a cacophony of screeches and honks and hums and stinks and sweetness and reds and grays and blues and yellows and rectangles and polyhedrons and weird irregular shapes of all sorts and cold surfaces and slippery, oily ones and soft, squishy ones and sharp points and edges; but somehow all of this resolves crisply into an orderly landscape of three-dimensional objects whose qualities we remember and whose uses we understand. How does this happen? The brain, after all, cannot see, or hear, or smell, or touch. It has a few remote devices—the eyes and ears and nose, the hands farther away, the skin—that bring it information from the world outside. But these devices by themselves only transmit the cacophony; they cannot make sense of it.

To some people, perception—the transmitting of all the sensory noise from the world—seemed the natural boundary between world and mind. Clark had already questioned this boundary with his theory of the extended mind. Then, in the early aughts, he heard about a theory of perception that seemed to him to describe how the mind, even as conventionally understood, did not stay passively distant from the world but reached out into it. It was called predictive processing.

Traditionally, perception was thought to work from the bottom up. The eyes, for instance, might take in a variety of visual signals, which resolved into shapes and colors and dimensions and distances, and this sensory information made its way up, reaching higher and higher levels of understanding, until the thing in front of you was determined by the brain to be a door, or a cup. This inductive account sounded very logical and sensible. But there were all sorts of perceptual oddities that it could not make sense of—common optical illusions that nearly everyone was prone to. Why, when you saw a hollow mask from the inner, concave side, did it nonetheless look convex, like a face? Or, when one image was placed in front of your right eye—a close-up of a face, say—and a very different image, such as a house, was simultaneously placed in front of your left eye, why did you not perceive both images, since you were seeing both of them? Why, instead, did you perceive first one, then the other, as though the brain were so affronted by the preposterous, impossible sight of a face and a house that seemed to be the same size and exist in the same place at once that it made sense of the situation by offering up only one at a time?
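The bottom-up account can be caricatured in a few lines of code: each stage consumes the output of the one below it, and nothing ever flows back down. The stages, thresholds, and labels here are invented for illustration, not a model anyone actually uses.

```python
# Toy caricature of purely bottom-up perception: each stage feeds
# the next, and no expectation from above ever influences a lower stage.

def detect_edges(pixels):
    # Stage 1: raw signal -> low-level features (here, just count contrasts).
    return sum(1 for a, b in zip(pixels, pixels[1:]) if abs(a - b) > 0.5)

def group_shapes(edge_count):
    # Stage 2: features -> a crude shape description.
    return "rectangle" if edge_count >= 4 else "blob"

def recognize(shape):
    # Stage 3: shape -> an object label, with no prior beliefs involved.
    return {"rectangle": "door", "blob": "cup"}.get(shape, "unknown")

signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # alternating contrasts
print(recognize(group_shapes(detect_edges(signal))))  # -> door
```

On this account the hollow mask should simply look hollow: the pipeline has no place where an expectation could overrule the signal, which is exactly the gap the illusions expose.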

It appeared that the brain had ideas of its own about what the world was like, and what made sense and what didn’t, and those ideas could override what the eyes (and other sensory organs) were telling it. Perception did not, then, simply work from the bottom up; it worked first from the top down. What you saw was not just a signal from the eye, say, but a combination of that signal and the brain’s own ideas about what it expected to see, and sometimes the brain’s expectations took over altogether. How could it be that some people saw a dress as white and gold while others saw the same dress as blue and black? Brains did not perceive color straightforwardly: an experienced brain knew that an object would look darker and less vivid in shade than in the sun, and so adjusted its perception of the “true” color based on what it judged to be the object’s situation. (Psychologists speculate that a brain’s assumptions about color may be set by whether a person spends more time in daylight or artificial light.) Perception, then, was not passive and objective but active and subjective. It was, in a way, a brain-generated hallucination: one influenced by reality, but a hallucination nonetheless.
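The claim that a percept combines the sensory signal with the brain's expectation has a standard mathematical form: a precision-weighted average, as in a simple one-dimensional Bayesian estimate, where whichever source is more reliable dominates. The numbers below are invented for illustration only.

```python
def perceive(signal, prior, signal_precision, prior_precision):
    # Percept = precision-weighted blend of what the eye reports and
    # what the brain expects; the more reliable source wins out.
    total = signal_precision + prior_precision
    return (signal * signal_precision + prior * prior_precision) / total

# Dim or ambiguous light: the signal is unreliable, so expectation takes over.
print(round(perceive(signal=0.2, prior=0.8,
                     signal_precision=1.0, prior_precision=9.0), 2))  # -> 0.74

# Bright, clear light: the signal is trusted, and expectation barely matters.
print(round(perceive(signal=0.2, prior=0.8,
                     signal_precision=9.0, prior_precision=1.0), 2))  # -> 0.26
```

Two brains with different priors about the lighting, fed the identical signal, land on different percepts, which is one way to gloss the dress that some viewers saw as white and gold and others as blue and black.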

This top-down account of perception had, in fact, been around for more than two hundred years. Immanuel Kant suggested that the mind made sense of the complicated sensory world by means of innate mental concepts. And an account similar to predictive processing was proposed in the eighteen-sixties by the Prussian physicist Hermann von Helmholtz. When Helmholtz was a child, in Potsdam, he walked past a church and saw tiny figures standing in the the belfry; he thought they were dolls, and asked his mother to reach up and get them for him: he did not yet understand the the concept of distance, and how it made things look smaller. When he was older, his brain incorporated that knowledge into its unconscious understanding of the the world—into a set of expectations, or “priors,” distilled from its experience—an understanding so basic that it became a lens through which he couldn’t help but see.

Being prey to some optical tricks—such as the hollow-mask illusion, or not noticing when a little word like “the” gets repeated, as it was three times in the previous paragraph—is a price worth paying for a brain whose controlling expectations make reliable sense of the world. Some schizophrenic and autistic people are strikingly less susceptible to the hollow-mask illusion: their brains do not so easily dismiss sensory information that is unlikely to be true. There are parallel differences with other senses as well. When neurotypical people touch themselves, it feels less forceful than an identical touch from another person, because the brain expects it—which is why it’s hard to tickle yourself. Schizophrenics are better able to tickle themselves—and also more prone to delusions that their own actions are caused by outside forces.

One major difficulty with perception, Clark realized, was that there was far too much sensory signal continuously coming in to assimilate it all. The mind had to choose. And it was not in the business of gathering data for its own sake: the original point of perceiving the world was to help a creature survive in it. For the purpose of survival, what was needed was not a complete picture of the world but a useful one—one that guided action. A brain needed to know whether something was normal or strange, helpful or dangerous. The brain had to infer all that, and it had to do it very quickly, or its body would die—fall into a hole, walk into a fire, be eaten.

So what did the brain do? It focussed on the most urgent or worrying or puzzling facts: those which indicated something unexpected. Instead of taking in a whole scene afresh each moment, as if it had never encountered anything like it before, the brain focussed on the news: what was different, what had changed, what it didn’t expect. The brain predicted that everything would remain as it was, or would change in foreseeable ways, and when that didn’t happen error signals resulted. As long as the predictions were correct, there was no news. But if the signals appeared to contradict the predictions—there is a large dog on your sofa (you do not own a dog)—prediction-error signals arose, and the brain did its best to figure out, as quickly as possible, what was going on. (The dog is actually a crumpled blanket.) This process was not only fast but also cheap—it saved on neural bandwidth, because it took on only the information it needed—which made sense from the point of view of a creature trying to survive.
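The loop described above, predict everything, attend only to the errors, revise the model when an error is too large to ignore, can be sketched in miniature. This is a toy under invented names and thresholds, not the brain's actual algorithm or any published model.

```python
# Toy sketch of a predictive-processing loop: the model predicts that the
# scene stays as it was; only prediction errors above a tolerance count
# as "news," and the model is revised to explain that news away.

def predict(model):
    return model  # expectation: everything remains as it was

def perceive_step(model, sensed, tolerance=0.1):
    expected = predict(model)
    errors = {k: sensed[k] - expected.get(k, 0.0) for k in sensed}
    news = {k: e for k, e in errors.items() if abs(e) > tolerance}
    for k in news:
        model[k] = sensed[k]  # revise the model to cancel the error
    return model, news

model = {"sofa": 1.0, "dog_shape": 0.0}

# Nothing changed: no error signal, so almost nothing is processed.
model, news = perceive_step(model, {"sofa": 1.0, "dog_shape": 0.0})
print(news)  # -> {}

# A dog-shaped mass appears on the sofa: a large error, so the model updates.
model, news = perceive_step(model, {"sofa": 1.0, "dog_shape": 0.9})
print(news)  # -> {'dog_shape': 0.9}
```

The economy is visible in the first step: when the predictions hold, the loop does essentially no work, which is the bandwidth saving the account turns on.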