Each evening, after the children were in bed, she would teach Paul everything she had learned that day, and they would talk about what it meant for philosophy. They later discovered, for instance, that the brain didn’t store different sorts of knowledge in particular places—there was no such thing as a memory organ. Even dedicated areas like the visual cortex could be surprisingly plastic: blind people, and people who could see but had been blindfolded for a few days, used the visual cortex to read Braille, even though that would seem to be a thoroughly tactile activity. All this boded well for Paul’s theory that folk-psychological terms would gradually disappear—if concepts like “memory” or “belief” had no distinct correlates in the brain, then those categories seemed bound, sooner or later, to fall apart.

Gradually, Pat and Paul arrived at various shared notions about what philosophy was and what it ought to be. They agreed that it should not keep itself pure: a philosophy that confined itself to logical truths, seeing itself as a kind of mathematics of language, had sealed itself inside a futile, circular system of self-reference. Why shouldn’t philosophy concern itself with facts? Why shouldn’t it get involved with the uncertain conjectures of science? Who cared whether the abstract concepts of action or freedom made sense or not? Surely it was more interesting to think about what caused us to act, and what made us less or more free to do so? Yes, those sounded more like scientific questions than like philosophical ones, but that was only because, over the years, philosophy had ceded so much of the interesting territory to science. Why shouldn’t philosophy be in the business of getting at the truth of things?

They were confident that they had history on their side. In the classical era, there had been no separation between philosophy and science, and most of the men whom people now thought of as philosophers were scientists, too. They were thought of as philosophers now only because their scientific theories (like Aristotle’s ideas on astronomy or physics, for instance) had proved to be, in almost all cases, hopelessly wrong. Over the years, different groups of ideas had hived off from the mother sun of natural philosophy and become proper experimental disciplines—first astronomy, then physics, then chemistry, then biology, psychology, and, most recently, neuroscience. Becoming an experimental discipline meant devising methods that allowed propositions which had previously been mere speculation to be tested. But it did not mean that a discipline had no further need of metaphysics—what, after all, would be the use of empirical methods without propositions to test in the first place? Philosophy could still play a role in science: it could examine the concepts that scientists were working with, testing them for coherence, and it could serve as science’s speculative branch, imagining hypotheses that were too outlandish or too provisional for a working scientist to bother with but which might, in the future, yield unexpected fruit.

In 1974, when Pat was studying the brain in Winnipeg and Paul was working on his first book, Thomas Nagel, a philosopher at Princeton who practiced just the sort of philosophy that they were trying to define themselves against, published an essay called “What Is It Like to Be a Bat?” Imagine being a bat, Nagel suggested. You are small and covered with thin fur; you have long, thin arms attached to your middle with webbing; you are nearly blind. During the day, you hang upside down, asleep, your feet gripping a branch or a beam; at dusk you wake up and fly about, looking for insects to eat, finding your way with little high-pitched shrieks from whose echoes you deduce the shape of your surroundings. “Insofar as I can imagine this (which is not very far),” he wrote, “it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat.”

The purpose of this exercise, Nagel explained, was to demonstrate that, however impossible it might be for humans to imagine, it was very likely that there was something it was like to be a bat, and that thing, that set of facts—the bat’s intimate experience, its point of view, its consciousness—could not be translated into the sort of objective language that another creature could understand. Humans might eventually understand pretty much everything else about bats: the microchemistry of their brains, the structure of their muscles, why they sleep upside down—all those things were a matter of analyzing the physical body of the bat and observing how it functioned, which was, however difficult, just part of ordinary science. But what it is like to be a bat would remain permanently out of the reach of human concepts.

This shouldn’t be surprising, Nagel pointed out: to be a realist is to believe that there is no special, magical relationship between the world and the human mind, and that there are therefore likely to be many things about the world that humans are not capable of grasping, just as there are many things about the world that are beyond the comprehension of goats. But if the bat’s consciousness—the what-it-is-like-to-be-a-bat—is not graspable by human concepts, while the bat’s physical makeup is, then it is very difficult to imagine how humans could come to understand the relationship between them. To describe physical matter is to use objective, third-person language, but the experience of the bat is irreducibly subjective. There is a missing conceptual link between the two—what later came to be called an “explanatory gap.” To argue, as some had, that linking consciousness to brain was simply a matter of declaring an identity between them—the mind just is the brain, and that’s all there is to it, the way that water just is H2O—was to miss the point.

Nagel’s was the sort of argument that represented everything Pat couldn’t stand about philosophy. “Various philosophers today think that science is never going to be able to understand consciousness,” she said in her lectures, “and one of their most appealing arguments—I don’t know why it’s appealing, but it seems to be—is ‘I can’t imagine how you could get pain out of meat, I can’t imagine how you could get seeing the color blue out of neurons firing.’ Now, whether you can or can’t imagine certain developments in neuroscience is not an interesting metaphysical fact about the world—it’s a not very interesting psychological fact about you.” But when she mocked her colleagues for examining their intuitions and concepts rather than looking to neuroscience she rarely acknowledged that, for many of them, intuitions and concepts were precisely what the problem of consciousness was about. Those were the data. Most of them were materialists: they were convinced that consciousness somehow is the brain, but they doubted whether humans would ever be able to make sense of that.

Part of the problem was that Pat was by temperament a scientist, and, as the philosopher Daniel Dennett has pointed out, in science a counterintuitive result is prized more than an expected one, whereas in philosophy, if an argument runs counter to intuition, it may be rejected on that ground alone. “Given a knockdown argument for an intuitively unacceptable conclusion, one should assume there is probably something wrong with the argument that one cannot detect,” Nagel wrote in 1979. “To create understanding, philosophy must convince. That means it must produce or destroy belief, rather than merely provide us with a consistent set of things to say. And belief, unlike utterance, should not be under the control of the will, however motivated. It should be involuntary.” The divide between those who, when forced to choose, will trust their instincts and those who will trust an argument that convinces them is at least as deep as the divide between mind-body agnostics and committed physicalists, and lines up roughly the same way.

When Pat first started going around to philosophy conferences and talking about the brain, she felt that everyone was laughing at her. Even thoroughgoing materialists, even scientifically minded ones, simply couldn’t see why a philosopher needed to know about neurons. Part of the problem was that, at the time, during the first thrilling decades of artificial intelligence, it seemed possible that computers would soon be able to do everything that minds could do, using silicon chips instead of brains. So if minds could run on chips as well as on neurons, the reasoning went, why bother about neurons? If the mind was, in effect, software, and if the mind was what you were interested in, then for philosophical purposes surely the brain—the hardware—could be regarded as just plumbing. Nobody thought it was necessary to study circuit boards in order to talk about Microsoft Word. A philosopher of mind ought to concern himself with what the mind did, not how it did it. Moreover, neuroscience was working at the wrong level: tiny neuronal structures were just too distant, conceptually, from the macroscopic components of thought, things like emotions and beliefs.