In response to:

Storm Over the Brain from the April 24, 2014 issue

To the Editors:

In Neurophilosophy I made the point that if you want to understand the mind, you need to understand the brain.1 In his review of my recent book Touching a Nerve: The Self as Brain, Colin McGinn garbles this simple message [NYR, April 24]. Here is the thing: there is a difference between a necessary condition and a sufficient condition. Oxygen is a necessary condition for burning wood; it is not a sufficient condition. Sufficiency requires much else besides, including bringing the wood to a kindling point, and so forth. When I said it is necessary to understand the brain in order to understand the mind, I was talking about a necessary condition. I was not talking about a sufficient condition.

In fact, I have famously argued for the coevolution of sciences at many levels. This explicitly includes the coevolution of the neurosciences and psychological sciences. To my delight but not surprise, the coevolution is now well underway in research on memory, attention, sensory systems, consciousness, and self-control, for example.

Other scientific disciplines are also extremely important in understanding the nature of the mind: genetics, ethology, anthropology, and linguistics. Philosophy can play a role too, when the philosopher sees it is rewarding to get out of the armchair. Some philosophers, such as Chris Eliasmith, for example, have truly made progress in computationally modeling how the brain represents the world.

Nevertheless, there are nostalgic philosophers who whinge on about saving the purity of the discipline from philosophers like Chris Eliasmith, Owen Flanagan, Dan Dennett, and me. What do the purists, like McGinn, object to? It is that their lovely a priori discipline, where they just talk to each other and maybe cobble together a thought experiment or two, is being sullied by…data. Their sterile construal of philosophy is not one that would be recognized by the great philosophers in the tradition, such as Aristotle or Hume or Kant.

Nobody in neuroscience needs McGinn to tell us that structural correlates of a function do not ipso facto explain that function. His sermonizing is just so much spit in the wind. What he fails to get is that sussing out structure is often a major step forward in sorting out mechanism. Famously this was so when Watson and Crick figured out the structure of DNA. Neither they nor anyone else supposed that the mechanism of how proteins got made could therefore just be read off the helical structure. First, a lot more had to be discovered, such as the role of RNA, the existence of both messenger RNA and transfer RNA, the role of ribosomes, and a whole lot else besides. But discovering the double-helical structure of DNA, with its four paired bases, was a huge step. This is likely to be true also in neuroscience.

The view for which McGinn is known is a jejune prediction, namely that science cannot ever solve the problem of how the brain produces consciousness. On what does he base his prediction? Flimsy stuff. First, he is pretty sure our brain is not up to the job. Why not? Try this: a blind man does not experience color, and he will not do so even when we explain the brain mechanisms of experiencing color. Added to which, McGinn says that he cannot begin to imagine what it is like to be a bat, or how conscious experience might be scientifically explained (his brain not being up to the job, as he insists). This cognitive inadequacy he deems to have universal epistemological significance.

Alongside the arrogance, here is one whopping flaw: no causal explanation for a phenomenon, such as color vision, should be expected to actually produce that phenomenon. Here is why: the neural pathways involved in visually experiencing color are not the same pathways as those involved in intellectually understanding the mechanisms for experiencing color. Roughly speaking, experiencing color depends on areas in the back of the brain (visual areas) and intellectual understanding of an explanation depends on areas in the front of the brain.

Likewise, I might be able to understand in great detail the mechanisms underlying pregnancy. But I do not expect such understanding to result in my becoming pregnant. A very different causal pathway is needed to achieve pregnancy. Ditto for the difference between understanding the mechanisms of experience, and having that very experience. This is not a new point.2

Anyhow, understanding a natural phenomenon is not an all-or-nothing affair. Do we understand how genes work? Yes, up to a point, but there are many things that are not yet understood. Gaps in our knowledge of the brain certainly exist, but undeniably, progress has been made. Some philosophers have elevated their favorite gap in neuroscience to the ontological status of an object, like the Black Hole in the Milky Way Galaxy, and hence they refer reverentially to the Explanatory Gap. Bosh. There are knowledge gaps all over the place, and slowly, many are closing as science proceeds.

McGinn closes with some arch tut-tutting about words. The fact is, language changes as culture changes, including as the scientific culture changes. People used to think that atoms are indivisible particles—and indeed “a-tom” originally meant “not splittable.” But atoms can be split and we still call atoms “atoms.” And brains do sleep, remember spatial locations, and learn to navigate their social and physical worlds. Get used to it.

Patricia Churchland

University of California, San Diego

Colin McGinn replies:

It is just possible to discern some points beneath the heated rhetoric in which Patricia Churchland indulges. But none of these points is right. If you hold that “mental processes are actually processes in the brain,” to quote Churchland, then you are committed to the thesis that understanding the brain is sufficient, and not merely necessary, for understanding the mind. This is just the well-known “identity theory” of mind and brain: mental processes are identical to brain processes; and the identity of a with b entails the sufficiency of a for b. To hold the weaker thesis that knowledge of the brain is merely necessary for knowledge of the mind is consistent even with being a heavy-duty Cartesian dualist, since even such a dualist accepts that mind depends causally on brain.

Churchland suggests that I am a philosophical purist who eschews all interdisciplinary cooperation. It is true that I think many philosophical problems cannot in principle be solved by science (a very common opinion), but that does not imply that I avoid interdisciplinary work or disapprove of it. On the contrary, I have done quite a bit of such work myself—connecting philosophy to physics, biology, psychology, film theory, and literary studies—and I admire the work of others in this vein.

The point about the explanatory gap is not that structure alone fails to provide a full explanation of consciousness. It is that even knowledge of brain structures and brain mechanisms is not sufficient for explaining consciousness. I cannot go into this issue here, which has been widely discussed in the literature, but the intuitive point is clear enough: consciousness as it presents itself to introspection appears to be just a different kind of thing from activity in the brain. If this were not so, no one would ever have been a dualist.

Churchland’s account of my arguments for our cognitive limitations with respect to explaining consciousness bears little relation to what I have written in several books, as anyone who has dipped into those books will appreciate. What she refers to as a “whopping flaw” in my position (and that of many others) is simply a complete misreading of what has been argued: the point is not that having a causal explanation for a phenomenon should produce that phenomenon, so that a blind man will be made to see by having a good theory of vision. The point is rather that a blind man will not understand what color vision is merely by finding out about the brain mechanisms that underlie it, since he needs acquaintance with the color experiences themselves.

I was not “tut-tutting about words”: I was saying that it is factually false to describe groups of neurons as making decisions (the “homunculus fallacy”).

I can see nothing enlightening in this exchange.