There is a mistake we all make, and keep making, over and over, at every possible occasion, and will keep making even after realising that we do. This happens because it is, in general terms, the most useful cognitive mistake one could dream of. It underlies much of cognition (human or otherwise) and is almost as necessary to us as oxygen itself. I will not try to get rid of it, but I believe that naming and dissecting it is of paramount importance: the process should deliver insights on the limits of human cognition, and hopefully also some hints on how to work around it.

The latest Edge annual question, “What Scientific Idea is Ready for Retirement?”, generated at least three answers that explicitly argue against Essentialism (Lisa Barrett, Richard Dawkins and Peter Richerson), but plenty of other answers can be linked to the same sort of criticism. What puzzles me (and what prompted this post) is that it is apparently still necessary to point out the limits of Essentialism; even more surprisingly, I have the somewhat grounded suspicion that many professional thinkers do not start their journeys well aware of these limits and of their cognitive foundations. In this post, I will look at the biology of cognition first (a quick, speculative glance, as we still know very little!), where the glaringly obvious seed of Essentialism is to be found. I will then very briefly observe the connection with logic and maths, arrive at fully formed Essentialism, and provide a couple of examples of its ugliest consequences. In the process I hope to explain a few things:

1. Essentialism is here to stay: in its most basic forms, it’s inevitable.
2. This introduces the mother of most errors: it “essentially” (!) explains why we can only approach a full understanding of reality, but will never be able to reach it.
3. There are some very important exceptions, as I have noted before.
4. Every self-respecting thinker should be very aware of all this, and should always understand whether her current thoughts apply to the general rule (2.) or the exceptions (3.); failure to do so is one of the reasons why pure thought (and maths) can pull our conclusions into random, and utterly wrong, directions.

The Biological side:

The starting point here is quite simple: our brains deal with symbols. They manipulate representations of reality, and use these virtual manipulations to drive our behaviour. To use an example Paul Bloom has recently employed: if I’m thirsty, I’ll grab a glass and fill it at the tap, because I know what glasses and taps are for. This requires symbolic reasoning, where “glass” and “tap” are symbols that my brain can manipulate, and they come associated with the information that defines them: a glass is a tool that makes drinking easier, a tap is a water-delivery system, and so on. Sure, this description is speculative, but plenty of accredited scientists share this view; for an excellent discussion, see: Marcus, G. (2009). How Does the Mind Work? Insights from Biology. Topics in Cognitive Science, 1(1), 145-172. DOI: 10.1111/j.1756-8765.2008.01007.x (incidentally, this is one of the best articles I’ve ever read, and it comes highly recommended).
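To make the idea concrete, here is a toy sketch in Python (all names, properties and the “planning” logic are my own invention, purely for illustration; it is emphatically not a model of how the brain implements any of this): symbols are labels that carry the qualities defining them, and behaviour is driven by querying those qualities rather than by touching the objects themselves.

```python
# A toy illustration of symbol manipulation: symbols as labels that carry
# the properties which define them, and a "plan" built by consulting those
# properties alone. All names here are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Symbol:
    name: str
    affords: set[str] = field(default_factory=set)  # what the thing is *for*

GLASS = Symbol("glass", {"hold-liquid", "drink-from"})
TAP = Symbol("tap", {"deliver-water"})

def plan(goal: str, known: list[Symbol]) -> list[str]:
    """Pick, by their defining properties alone, the symbols that satisfy a goal."""
    if goal == "drink":
        container = next(s for s in known if "drink-from" in s.affords)
        source = next(s for s in known if "deliver-water" in s.affords)
        return [f"take the {container.name}", f"fill it at the {source.name}", "drink"]
    raise ValueError(f"no plan for goal: {goal}")

print(plan("drink", [GLASS, TAP]))
# ['take the glass', 'fill it at the tap', 'drink']
```

Note that plan() never inspects an actual glass: it only consults the properties attached to the labels, which is all that symbolic reasoning can ever do.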

The Formalisation:

One way to describe the idea of symbols comes from logic/maths, in the form of Equivalence Classes (ECs). I will not bore you with the formal definition, and will instead use the description I was given when the concept was introduced to me at school: Equivalence Classes are labels that can be used as shorthand descriptions of a given collection of qualities. A glass is a glass if it can be used to facilitate drinking in such and such a way. ECs are fantastically powerful cognitive tools, because they allow us to brush aside all the trivial details and consider only what is relevant. Another way of saying this is that ECs are the building blocks of all models, and models (of one sort or another) are necessary for cognition, reasoning and communication.
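For the more formally minded, here is a minimal sketch of the idea (the key function and the objects are examples of my own choosing, not a formal definition): grouping objects by a chosen key induces an equivalence relation, since x ~ y exactly when key(x) == key(y), which is automatically reflexive, symmetric and transitive. The resulting group labels are the shorthand descriptions the text talks about.

```python
# Equivalence classes as "shorthand labels": partition objects by the
# qualities we care about, then reason about the labels, discarding
# every other detail.
from collections import defaultdict

def partition(objects, key):
    """Group objects into equivalence classes under `key` (x ~ y iff key(x) == key(y))."""
    classes = defaultdict(list)
    for obj in objects:
        classes[key(obj)].append(obj)
    return dict(classes)

items = [
    {"name": "tumbler", "holds_liquid": True, "drink_from": True, "colour": "blue"},
    {"name": "wine glass", "holds_liquid": True, "drink_from": True, "colour": "clear"},
    {"name": "bucket", "holds_liquid": True, "drink_from": False, "colour": "red"},
]

# Label by the qualities that matter for drinking; colour et al. are brushed aside.
by_use = partition(items, key=lambda o: (o["holds_liquid"], o["drink_from"]))
print(list(by_use.keys()))  # [(True, True), (True, False)]
```

Once the partition is made, everything downstream reasons about the two labels alone; the colour of the tumbler has been brushed aside, which is exactly the trade-off discussed in the next section.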

The Essentialism Fallacy:

The problem is that ECs are so useful (and so unavoidable, since all reasoning can do is manipulate symbols, not the actual objects) that once they are applied to something, it is terribly easy to forget that the symbol is not the real object. If our particular symbol is a very good one, the properties associated with it may indeed include all the information we need to know about an actual object, and sooner or later we may become completely unaware that by dealing with ECs we are trading off precision in return for handiness. We are applying a useful simplification. This is what I had in mind in my foundation posts, and it is another way to explain why reality is, to some extent, unknowable. I cannot stress this point enough, because it is of massive importance for both science and philosophy.

You can see some examples of its importance for science in the Edge links at the top of this post: the common denominator is that scientists use ECs to build useful models, but then get carried away and start thinking, for example, that tigers are characterised by some absolute and objective “tigerness”, whereas this “tigerness” doesn’t really exist, and is only the direct result of how our brains work. Philosophers make the same mistake, and of course, following Plato, may even do so in an explicit and systematic way. Instead, while we are in the business of understanding reality, we should always be aware that “understanding reality” is the process of finding, defining and later exploiting useful simplifications; we are not, in any way or form, identifying and isolating the true essence of real objects, and we are emphatically not finding out what is more real than the real thing. Falling for the Essentialism Fallacy generates all sorts of mistakes, and some of them can explain the worst atrocities of human history.

Essentialist horrors:

I shouldn’t really need to spend too many words on this, as Dawkins’ article makes the case convincingly, but in case you don’t wish to read it, I will reiterate it here in my own words. Let’s start with what should be a scientific subject: life. When does life start and finish? At first sight it seems pretty obvious: in everyday circumstances we have little trouble deciding whether a person is dead or alive. But the trouble is that life (like all ECs, which exist because they are extremely useful simplifications) doesn’t have an essence: its boundaries are blurred, and it is impossible to define them precisely. If I pull a hair out of my head, chances are that the bulb will still be attached. This is made of living cells, so we could conclude that it’s alive, but is it really? Those cells will certainly die outside my body, and never had a chance of “living” independently. My blood is full of living cells, but who considers a drop of blood alive? What about cultured cells in a Petri dish in a lab? They are most certainly alive, but how exactly are they different from the drop of blood? They aren’t: who said it must be impossible to keep alive the white cells found in a drop of blood on a Petri dish?

The examples above are trivial, and I used them specifically because troublesome moral implications might otherwise obfuscate my point. The basic idea is that once we start looking at the areas where life starts and ends, we don’t find clear-cut dividing lines, and therefore we can’t isolate and define life in any objective way. The concept of “life” becomes meaningless when you require an objective definition; it is meaningful only if one accepts (or better: ignores) the impossibility of a precise definition. The Essentialism Fallacy thus finds fertile ground to show all its nastiness. Most people ignore the impossibility of defining life, and will readily use the concept even where it is genuinely uncertain, namely at the boundaries where life begins and ends. This leads to mistakes, horrors and absurdities, where in the name of life we may prolong the agony of quasi-dead bodies (in some cases this is certainly equivalent to “saving” a drop of blood by cultivating it in a Petri dish), or arbitrarily decide that a zygote is a “person”, giving a single cell a disproportionate importance, and often forgetting about the fully formed person that carries it. As you can see, these latter cases are not trivial “armchair philosophy” subjects: they are real and do matter to all of us. And still, the general attitude is to approach them in essentialist terms, even if it should be clear to all that doing so is not only conceptually wrong, but also dangerous and counter-productive.

Not convinced? Think of race: we refer to people as Caucasian, Black, Brown and in a thousand different ways, based on their external appearance. But it is a scientific fact that it is impossible to establish clear boundaries of ethnicity: we are all “mixed” to some extent. In this case the blurred edges are so wide that they probably spread across the whole of the possible ethnicity-space. And yet we even base our policies on such ungrounded classifications. I am not saying that we shouldn’t: in some cases even this absurd way of defining people could be useful, for better or for worse. The problem is that most of us are happily oblivious to the fact that ethnicity-based definitions are amongst the crudest simplifications possible, and should be treated as such. Instead, many human beings are happy to think of races as defined by one essence or another, a process that sustains racism (I wouldn’t go so far as to say that it generates it) and all the horrors that follow from it.

The same dangers apply to most moral considerations: we judge people and events based on broad and objectively undefinable classes, and then make decisions as if those classes were real. This generates an awful lot of harmful mistakes, and (following the usual pattern) we mostly don’t even notice. What I’m trying to say is that the Essentialism Fallacy is not an obscure and irrelevant intellectual trick: it is the source of some of the most consequential errors ever made. And it is everywhere; it affects all of us, including scholars, scientists and philosophers, but what’s worse, it is ubiquitous amongst clerics, politicians and citizens.

The exceptions:

This is where my own thoughts become intriguing (to me, at least). What I have expounded above is a conceptual argument: it itself deals with symbols, and it is worth exploring because it is (hopefully) a useful simplification. Therefore, one prediction is obvious: it can’t be an absolute truth; exceptions must exist. And they do. The most notable is maths, but in general, ideas themselves may not be susceptible to Essentialist errors. Let me clarify: if I am trying to build an understanding of how having a particular idea changes and influences my own thoughts, the subject of my reasoning is already made of ECs. This means that, for once, my reasoning can deal with its subject by, at least theoretically, dealing with the real subject itself, and not just a symbol of it. In theory, if I’m thinking about ECs, I’m thinking about the only kind of constructs that really have (and are defined by) an essence. The result is that when thinking about concepts, one can (at least theoretically) establish absolute truths, and this is radically different from “finding useful simplifications”. The typical example is mathematics: because all of it applies exclusively to abstract concepts, there are plenty of absolute truths to be found. Two plus two does equal four, and there are no exceptions. The same applies to the process of evaluating different and competing theories (or systems of “useful simplifications”): I can, without doubt, conclude that the flat-earth idea is less precise than the approximation of the world as a sphere; both are wrong, but the latter is less wrong. The same applies to creationism and evolution: there can be no doubt that the theory of evolution is a better approximation of the truth than creationism. In both cases, I can claim that the conclusion is objective because it pertains to concepts that are themselves made of equivalence classes.
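The “less wrong” comparison can even be made quantitative. Here is a back-of-the-envelope sketch (the coordinates and the mean Earth radius are standard reference values; the rest is my own illustration): compute the London–New York distance under a flat-plane model and under a spherical model. Surveyed and GPS-derived figures for this route sit at roughly 5,570 km; the spherical model lands within a fraction of a percent of that, while the flat model overshoots by roughly 250 km. Both are simplifications, but one is measurably less wrong.

```python
# Comparing two wrong models of the Earth: a flat plane vs a sphere.
from math import radians, sin, cos, sqrt, atan2

R = 6371.0  # mean Earth radius, km
london = (51.5074, -0.1278)
new_york = (40.7128, -74.0060)

def flat_distance(a, b):
    """Flat-earth model: treat latitude/longitude as planar coordinates."""
    dlat = radians(b[0] - a[0])
    dlon = radians(b[1] - a[1]) * cos(radians((a[0] + b[0]) / 2))
    return R * sqrt(dlat**2 + dlon**2)

def sphere_distance(a, b):
    """Spherical model: great-circle (haversine) distance."""
    lat1, lat2 = radians(a[0]), radians(b[0])
    dlat, dlon = lat2 - lat1, radians(b[1] - a[1])
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * R * atan2(sqrt(h), sqrt(1 - h))

print(f"flat model:      {flat_distance(london, new_york):.0f} km")  # ~5820 km
print(f"spherical model: {sphere_distance(london, new_york):.0f} km")  # ~5570 km
```

Neither number is “the” distance (the Earth is not a perfect sphere either); the point is only that competing simplifications can be ranked objectively against observation.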

If this isn’t confusing enough, the consequences of this line of thought are even more puzzling. Consider the glass mentioned above: it is man-made, and is the result of EC-driven thought; it was made in a particular form, shape and material because its designer did have an idea of what a glass should be: at least a part of what all drinking glasses are is the result of essentialist reasoning. The first consequence is that the most useful EC I can apply to the actual glass is indeed closely related to the original idea of what a glass is. The second, and more intriguing, consequence is that, in some sense, giving weight to the “glassness” class, and consequently less importance to the actual object, is in this case less of a fallacy (but not quite objectively right: remember, we are dealing with blurred boundaries). Because the glass was created with the intent of instantiating a token of the conceptual “glass” class, one could say that it does indeed have an essential side. This is true, but understanding manufactured objects exclusively in terms of their (intended) essence still denies (or brushes aside) their own peculiar physicality. The end result is another blurred edge: designed objects do have, to some variable extent, an essence, and are therefore somewhat less vulnerable to the fallacy; but they are still physical constructs, and as such they also have qualities that can never be fully described in essentialist terms. The more conceptual an object is (I’m thinking, for example, of software, but books and fiction apply as well), the more susceptible to essentialist analysis it will be.

Conclusions

The consequences of this line of thought are difficult to grasp, and I expect to keep mumbling about them for quite a while. For my own purposes, the following observation is paramount: if you conclude that I drank some water because I was thirsty, you may be right or wrong, but the underlying question is meaningful. On the other hand, if you conclude that the glass fell because of gravity, you are already prey to the Essentialism Fallacy: was it gravity or the curvature of space-time? This latter question is meaningless, because both options are “useful simplifications” and not reality itself. One could and should reformulate the question, and ask “which account of the fall is more useful?” instead. It may be a subtle difference, but it is very important when one tries to understand reality: the epistemological limits of what can really be understood, and reflexively, a correct understanding of what “understanding reality” really means, are of paramount importance to all scientific endeavours.

Therefore, the general conclusion is that all intellectual efforts should proceed well aware of the Essentialism Fallacy, and should adjust their methods and claims according to the varying degree to which the fallacy applies to their specific domain. Failing to do so may (and frequently does) lead to catastrophic errors, and some of them have indeed contributed to the darkest moments of human history.