Sometimes reading a flawed argument triggers my rage (I really do get angry), a phenomenon that invariably surprises and amuses me. What follows is my attempt to use that anger constructively; it may include elements of a knee-jerk reaction*, but I’ll try to keep my emotions in check.

Dr. Epstein recently published a badly misguided essay on Aeon, entitled “The empty brain“. The subtitle makes the intended take-home message clear: “Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer“.

Unfortunately, the essay is systematically wrong: virtually every key passage is mistaken, and yet, overall, it tries to make an argument that is worth making. Thus, I first grew annoyed by the mistakes and misrepresentations (my immediate comment was “this is so wrong it hurts”), and then descended into anger, because Epstein is actually damaging the credibility of an approach that I find promising but all too often misunderstood or straw-manned.

In what follows, I will blatantly ignore the first rule of civilised debate: I will not try to give a charitable reading of the original essay. I won’t, because doing so would effectively hide my reasons for writing this reply. Instead, I will report the key arguments proposed by Dr. Epstein, explain why I think they are wrong, and then finish by outlining why I nevertheless sympathise with some of the science the essay endorses (as I understand it).

Epstein’s essay starts by defining the overall aim:

The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

My nonsense detectors immediately started making noises: we can remember lots of stuff, so it’s undeniable that we do contain memories. Perhaps he meant that memories are surprisingly different from what we might think they are? The essay then states that:

For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies.

Unfortunately, what follows doesn’t show “how vacuous this idea is”; it merely re-states the point. The real trouble starts when actual computers are described:

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’).

[…]

I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Uh oh. Dr. Epstein has made it very clear that he doesn’t understand computers. Computers don’t literally contain zeroes and ones (or images, or symphonies, or texts…); they contain physical stuff, highly organised in precise and changeable structures, which can be interpreted as zeros and ones, which in turn can be interpreted as representations of virtually anything. This point is crucial, and something I’ve discussed at length before (in the context of “brain/mind” sciences, see here and here): computers are designed to make their behaviour predictable and understandable. Because of these designed features, interpreting their inner workings becomes relatively easy, and thus it becomes possible (not utterly wrong) to say that they “really” do operate on symbolic representations.

However, this is true only because we explicitly design the interpretation maps (we write the software). In other words, the symbolic nature of what happens inside our computers holds in virtue of what happens within the brains of the people who design, program and use computers; there is nothing intrinsic in a computer that makes its internal patterns of electrical activity “stand for” this or that. One could therefore make the opposite case, point out that physical computers are just a bunch of mechanisms in motion, and conclude that computers don’t process information at all. That would be formally defensible, but absurd, right? Indeed it would: the whole point of computers is to process information. Even though it is entirely possible to produce an explanation of how they work that ignores any concept of information, such an explanation would be useless if our aim is to understand why computers behave in certain ways. Information is in the eye of the beholder, and that is precisely why it’s a useful concept. Furthermore, it is entirely possible and appropriate to describe information in terms of underlying structures.
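To make the point concrete, here is a toy sketch of my own (not from Epstein’s essay): the very same bytes of “physical state” yield entirely different readings depending on the interpretation map we choose to apply to them.

```python
# The same physical bit pattern admits many readings; the "meaning" lives
# in the interpretation map we choose, not in the bits themselves.
raw = bytes([0x48, 0x69, 0x21, 0x00])  # four bytes of machine state

as_text = raw[:3].decode("ascii")              # read as characters
as_int = int.from_bytes(raw, byteorder="big")  # read as one unsigned integer
as_bits = " ".join(f"{b:08b}" for b in raw)    # read as raw bit patterns

print(as_text)   # Hi!
print(as_int)    # 1214849280
print(as_bits)
```

Nothing about the bytes themselves privileges one reading over another; the text, the number and the bit string are all observer-supplied interpretations of the same physical configuration.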

To make the concept even clearer, let’s look at another biological phenomenon: inheritance and DNA. You can (and should) describe DNA in structural terms: the double helix, the shape of nucleotides, the molecular mechanisms of DNA replication, of protein synthesis, and so forth. However, once all of the above is done, it is also handy to describe stretches of DNA in terms of pure information, namely the sequence of nucleotides, represented by the letters A, T, C and G. A stretch of DNA can thus be effectively described by something like this:

The image above is a representation of the gene which encodes insulin. Crucially, it is this kind of description that enabled the production of synthetic insulin, and thus of cheaper and safer medication. My point: both a purely structural and a purely information-centric description of the insulin gene are possible. The latter is more abstract, and because of that it is frequently more useful.
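As a minimal sketch of what an information-level description buys you, consider the following; the fragment is a made-up placeholder, not the actual insulin sequence:

```python
# A hypothetical DNA fragment (NOT the real insulin gene), described purely
# as information: a string over the alphabet {A, T, C, G}.
fragment = "ATGGCCCTG"

# At this level of description we can manipulate the sequence without any
# reference to the underlying chemistry, e.g. compute the reverse complement
# (the opposite strand, read in the conventional direction).
complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
reverse_complement = "".join(complement[base] for base in reversed(fragment))

print(reverse_complement)  # CAGGGCCAT
```

Everything here operates on the abstract sequence alone, which is exactly the level of description at which genes can be specified, compared and synthesised.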

In the same way, describing the inner workings of computers in terms of information makes perfect sense, but it doesn’t negate that a more concrete description would involve physical mechanisms.

Back to Epstein’s essay. So far, we’ve established that his crucial point (“Computers, quite literally, process information“) is at best misleading: they do, but we may say so only because it is a useful way to conceptualise how computers operate. In another sense, computers don’t process information at all; they just shuffle electrical charges, and Information Processing (IP) is merely a useful interpretation, added by us, the observers.

The essay continues by remarking that, historically, bodies and then brains have been described by means of metaphors employing the most advanced technologies known at a given time. Currently, digital technologies are used, so we might be entitled to predict that, as technology advances, we will stop using the silly metaphor of IP and jump on the next bandwagon (also: where does the dichotomy between metaphors and “actual knowledge” come from?). This may be, but again, it’s a misleading way of looking at what happened: once technology started producing complex-enough mechanisms, it became possible to conceive the idea that organisms may be nothing more than complicated mechanisms. Subsequently, once Shannon’s Information Theory (SI) was developed, it became possible to describe dynamic structures in terms of their informational content (storage, signalling and processing). As exemplified by my detour into molecular biology, this new, more abstract way of describing things happens to be frequently very useful, and thus people look at the inner workings of brains and nervous systems by employing the informational metaphor as well. When an action potential travels along an axon, it is natural, handy and useful to describe the shuffling of ions as a travelling signal. If you do, you are already using the IP metaphor: if it’s a signal, we are already describing it in SI’s terms.
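To make the “signal” point concrete, here is another toy sketch of mine (the spike train is invented, not real data): the moment we treat a train of action potentials as a binary signal, Shannon’s framework applies directly, for instance to quantify its entropy per time bin.

```python
import math
from collections import Counter

# A made-up spike train: each character is one time bin, "1" marks an
# action potential. Treating it as a signal lets us apply Shannon's measures.
spike_train = "0010100010010001"

counts = Counter(spike_train)
n = len(spike_train)
# Shannon entropy: H = -sum(p * log2(p)) over the symbol probabilities.
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

print(f"{entropy:.3f} bits per time bin")  # ~0.896
```

None of this commits us to any claim about what the neuron “really” is; it simply shows that the informational description is well defined and immediately useful once we adopt it.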

The following step should really clarify where my anger comes from. Apparently Dr. Epstein finds it surprising that neuroscientists don’t know how to describe their subject without deploying IP. He believes they should avoid IP altogether, because, according to him, it’s clearly wrong:

The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.

A few little problems here! First of all, the IP metaphor is pervasive because it’s useful, as I’ve demonstrated above. Second, I’ve never heard of, and have no need to deploy, such a silly syllogism. The reasoning I’m defending is different: it is reasonable to interpret complex control mechanisms in terms of information processing; brains are complex control mechanisms; therefore it is reasonable to deploy the IP metaphor when describing and studying their inner workings.

Moving on, Dr. Epstein then attempts to demonstrate that the IP metaphor is damaging neuroscience. To do so, he makes a genuinely important observation: when asked to draw a one-dollar bill, people perform poorly if they do so without having an actual bill to copy. This is worth noting: people can draw something which resembles the original in important respects, but most of the details will be missing. The correct conclusion is that our brains are not optimised to store faithful representations, and that whatever it is they do store is usually very sketchy. In other words, efficiency and efficacy are normally favoured; accuracy isn’t. Jumping from this observation to the conclusion that the information needed to produce a rough sketch of a one-dollar bill isn’t somehow present in the brain is so blatantly wrong that I don’t even know how to refute it. Unfortunately, it seems that Dr. Epstein wants us to draw exactly this absurd conclusion (the “any sense” clause is a deal-breaker):

[N]o image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.

In fairness, Dr. Epstein then tries to make a subtler point:

As we navigate through the world, we are changed by a variety of experiences.

[…]

no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.

In other words, seeing a banknote, hearing a song, or (even more so) singing one will change some structural element inside us, presumably in the brain. Fine: this is manifestly what every neuroscientist thinks is happening. And because we can link structures and structural changes to information and information processing, we can, if desired, deploy the Information Processing metaphor. In short, Dr. Epstein has so far proposed a number of questionable claims, peppered with one interesting observation (which manifestly refutes one of his intended take-home messages): whatever it is that our brains do store is apparently surprisingly inaccurate.

At this point the essay goes on to both promote and misrepresent a branch of Cognitive Science which I find very interesting, promising and rightly controversial: Radical Embodiment.

The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

Why do I say “rightly controversial”? Because if one interprets it as above, the whole idea fails to make sense. Hypothesising “a direct interaction between organisms and their world” implies that there is nothing to learn from studying the mechanisms which mediate those interactions and happen to occur inside bodies (would these count as indirect?). In other words, it declares the reductionist approach a dead end a priori. Trouble is, nobody does this: we do study how sensory signals travel along nerves towards the central nervous system, and we study what happens within brains in similar ways. The only problem I have with Radical Embodiment is that it might superficially seem to espouse such a view, while I happen to think it tries to do something much more important, and orders of magnitude more useful.

Radical Embodiment is challenging our understanding of “representations” and showing that they are far less “information rich” than our common intuitions would suggest. It does so by showing how much interaction with the world is necessary for guiding and fine-tuning behaviour. It does challenge the idea that we hold detailed models of the world and interact with those (instead of interacting with the world itself), and it does so for many good reasons; but, as exemplified in this brief exchange, it does not challenge the IP metaphor: it merely shows how to apply it better!

Dr. Epstein goes on by citing reputable sources and even mentions Andrew Wilson and Sabrina Golonka’s blog (see also their excellent Twitter feed), which happens to be one of my favourite corners of the Internet.

This is one reason why I’m writing all this: if I’m right, Dr. Epstein is badly misrepresenting the Radical Embodiment idea, and in doing so he is unnecessarily making it look mistaken and indefensible. Far from it: it is something that deserves a lot of attention and careful study. In the always thought-provoking words of Wilson and Golonka (2013), the main idea behind the movement is:

Embodiment is the surprisingly radical hypothesis that the brain is not the sole cognitive resource we have available to us to solve problems.

To me, it is self-evident that this radical idea is basically correct, and it is also a reason why it is so difficult to figure out how brains work: one needs to account for much more than just neurons. At the same time, while I accept the basic idea without reservations, I am also worried that, as exemplified by the short discussion I’ve linked above, radically rejecting all uses of the “representation” concept isn’t going to work. What needs to be done is different, but that is perhaps best left for another time.

Overall, Cognitive Neuroscience is tricky; it is prohibitively hard, and, as I argue in the introduction here, it is of paramount importance to carefully select the correct metaphors in order to convincingly describe the vast number of different phenomena occurring at different scales (from the psychological, to the neural, down at least to the molecular). In this context, expecting the IP metaphor to prove useful at one or more of these levels (as it is in the case of computers) is entirely justified. Challenging the consensus is something that scientists probably aren’t doing enough, but alas, Dr. Epstein’s attempt unfortunately fails to do so.

Notes and Bibliography:

*It’s even more interesting to note that when I write an angry reaction, the resulting post frequently happens to be among the most popular on this blog; see for example here (with follow-up) and here. It’s also worth noting that the essay I’m criticising has collected a very high number of negative comments; see the one from Jackson Kernion in particular.

Wilson, A., & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4. DOI: 10.3389/fpsyg.2013.00058