“I’m in agony!”

“Do you want a doctor?”

“No, I want a philosopher!”

– David Pearce, expressing bafflement at the eliminativist view of consciousness.




What Does Consciousness Realism Entail?

The realist view stands in contrast to the eliminativist view, which holds that consciousness is not real, and hence that the existence and character of conscious states are not facts about the world. On this view, when someone is in agony, that is not an objective fact independent of outside observers. As my eliminativist friend Brian Tomasik puts it: “We're not conscious but only think we are.”

Yet the notion that consciousness is an illusion is among the hardest claims to make any sense of, at least for a consciousness realist. As David Pearce has noted:

“Let’s say that we undergo merely illusory agony, hear illusory melodies and feel illusory jealousy, and so forth. On an ontology of eliminativist materialism, there shouldn’t be any of this illusory phenomenal “seeming” either. A seeming oasis in the desert may turn out to be a mirage; but the mirage itself isn’t illusory.”

Eliminativists like Brian Tomasik don’t seem swayed by such arguments, however:

“No thought you have is guaranteed to be free from bugs, and it seems more likely – given the basically useless additional complexity of postulating a metaphysically privileged thing called consciousness – to suppose that our attribution of metaphysically privileged consciousness to ourselves is a bug in our cognitive architectures.”

To which one might respond that, first, the meaning of the term “metaphysically privileged” is not obvious and stands in need of unpacking. To be clear, saying that consciousness exists is not to say that consciousness is “metaphysically privileged” in any sense. Second, the existence of consciousness is the thing in need of explanation, not something additional we postulate. Indeed, if we reasoned like this in all domains, we might as well throw every phenomenon of the world away – gravity, climate change, protein folding – and claim that there is nothing whatsoever to be explained, as that would be much simpler. There is no reason to postulate the existence of these things, after all, when their absence would seem to leave us with a much more appealing ontology.

The distinction between phenomena to be explained and postulates to explain with is a crucial one. Features of the world are not mere assumptions we can simply discard, and consciousness, the realist position holds, is indeed a feature of the world. As David Pearce notes: “If their theory of the world has no place for your first-person experience of agony, then so much the worse for their theory of the world.”

The Physical Basis of Different States of Consciousness

A good hint at the answer to the question of what constitutes the physical basis of different states of consciousness comes from our own direct experience, coupled with the fact of evolution. The hint is that the character of our experience shows every sign of being a product of evolution. We feel pain when our body is harmed, and suffer when we are starving, while we enjoy food, find comfort in being safe, and find great pleasure in sex. In short, the things that threaten our existence generally feel unpleasant, while that which helps us survive and pass on our genes gives us pleasure. This may seem to say precious little about the physical basis of our experience – yet only if we fail to keep in mind that evolution is an entirely physical process: a delicate unfolding of self-replicating organic molecules. When we combine these two facts, what follows is that our conscious minds have arisen through the interplay of complex organic molecules.

This reasoning is of no small significance, because the fact that the character of our conscious experience shows every sign of being a product of evolution – as we know hearts and lungs are – gives us good reason to think that conscious states such as suffering and well-being are, like hearts and lungs, not simple, intrinsic features of the universe at a fundamental level, but rather complex physical phenomena.

The same conclusion can be derived from our more direct studies of the physical basis of conscious experience – i.e. neuroscience, in particular neuro-electrode studies – which have revealed that conscious states depend upon certain kinds of brain activity. Different experiences, such as the experiences of joy and pain, correlate with different patterns of activity in certain structures of the brain. And while this fact obviously does not logically exclude the possibility that complex composite experiences may arise in systems that do not resemble brains in the least, it does provide good reason to be highly skeptical that they do. For the fact that all aspects of our experience reliably depend upon specific and highly complex physical processes renders it unreasonable to believe that, say, simple unorganized systems such as rocks or plants give rise to complex unitary experiences of any kind.

On the realist view of consciousness, we have a large research project ahead of us: uncovering the physical basis of consciousness beyond this most basic level of understanding. On an anti-realist view such as that of Brian Tomasik, on the other hand, there is no empirical question to be answered about the physical basis of conscious experience, as there are no real facts to be discovered in the first place. To the extent there is a project ahead of a non-realist, the project is merely to decide what one wants to consider conscious – if anything at all. This is a significant difference in the practical implications of the two views.

Corollaries of Physicalist Monism

On any physicalist monist view, then, our conscious states are certain physical states, and if we combine this view with the evolutionary origin of our conscious minds, it follows that these physical states have been selected for over the course of evolution. For instance, there is nothing inherently suffering-triggering about a functioning body and brain that is not getting food, or even getting eaten alive; but over the course of evolution, such events, or the mere threat of them, have been tied to certain physical states – unpleasant experiences – because such experiences, as we know first-hand, strongly encourage their own avoidance. This separability of noxious stimuli and suffering is also evident from the fact that certain genetic disorders, such as nonsense mutations in the SCN9A gene, cause a complete inability to experience physical pain. In such cases, the physical structures that mediate[1] the experience of pain, and which have emerged as an adaptive feature of brains over the course of evolution, fail to develop. Physical pain cannot be felt.

One may object that this seems weird. The suggestion that certain complex physical states are identical to conscious states such as suffering and pleasure, and that evolution has then stumbled upon and taken advantage of these states, can almost seem to suggest that the world was “built” for such conscious states to emerge – a promising new path along which theologians can exercise their unmatched skills of motivated reasoning, perhaps?

Not quite. The weirdness is not as great as it may seem at first sight, the reason being the competence of evolution. Consider, by analogy, the emergence of the human hand. The fact that the human hand is a physical structure that can exist in the universe, and, on top of this, that evolution has then stumbled upon this structure, can seem incredible as well – yet only until we realize how evolution works: through many small steps of non-random selection, this functional design emerged. Similarly, given an identity between the physical and the experiential, it is not particularly weird that specific, unlikely conscious states have emerged over the course of natural selection. A pair of human hands is no less specific or unlikely a state for matter to be organized in than the human mind/brain, and the emergence of the latter is no more miraculous than that of the former.

An Example of Eliminativist Interpretation: Consciousness as Computation

Yet the view that consciousness is identical to computation is popular nonetheless. One intuition pump that may raise our estimate of its plausibility is the observation that brains are objects with a lot of tiny components that send electrical signals to each other, and modern computers are objects with a lot of tiny components that send electrical signals to each other. So why say that the one is conscious while the other is not?

This intuition pump is confusing, however, as it can cause us to lose sight of the more general notion of a computer, and of what a computer in fact is. For the general concept of a computer has nothing to do with tiny components sending electrical signals to each other per se. Indeed, one could construct a computer out of billiard balls (or, unethically, crabs),[2] and on a computational view, such a system of systematically bouncing billiard balls, if they are made to bounce in a way that models the “information processing” of a living brain to a sufficient degree (and clarifying the meaning of “sufficient degree” is anything but trivial in this context), will undergo the same conscious experiences. If one thinks conscious experiences exist at all, that is.
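The billiard-ball computer is not merely a thought experiment: Fredkin and Toffoli showed that idealized elastic collisions can implement a universal logic gate, the controlled swap. As a purely illustrative sketch (the function names are mine, and the collision physics is abstracted away entirely), here is that gate in Python, with ordinary Boolean logic recovered from it:

```python
# The Fredkin (controlled-swap) gate: the logical operation that
# idealized billiard-ball collisions can implement. It is universal,
# meaning any Boolean circuit can be built from it alone.

def fredkin(c, x, y):
    """If the control bit c is 1, swap x and y; otherwise pass them through."""
    return (c, y, x) if c else (c, x, y)

def and_gate(a, b):
    # With a constant 0 on the third input, the third output equals a AND b.
    return fredkin(a, b, 0)[2]

def not_gate(a):
    # With constants (0, 1) on the data inputs, the third output equals NOT a.
    return fredkin(a, 0, 1)[2]
```

The point of the exercise is that nothing in the gate's input–output behavior depends on whether the bits are voltages, billiard balls, or crabs; what the computational view must claim is that conscious experience supervenes on this abstract behavior alone.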

Brought into this different context – bouncing balls rather than tiny components sending electrical signals – most people would probably be more skeptical of the claim that computers are conscious. How could separate bouncing billiard balls, no matter how they bounce, possibly give rise to a unitary experience?

Yet there is actually no fundamental difference between the bouncing billiard-ball computer and the electrical silicon computer. In both cases, there is no unified physical whole that can, to my mind, conceivably be identified with a unitary conscious state. For instance, how could a Turing machine that performs one calculation at a time – say, one calculation a month – possibly give rise to a unitary experience of the sight of a beach, the warmth of the sun, and the sound of the latest Bieber hit? This seems inconceivable to me, which of course does not prove it impossible.

More generally, there is no guarantee that a simulation of something, no matter how much information it includes about that something, will have the same properties as the thing being simulated. For example, a computer simulation of a gas under high pressure does not actually contain any high pressure itself, and no matter how much information we gradually include in such a simulation, there is no point at which it will start containing high pressure, nor any point at which actual gas molecules will start emerging. Might conscious states be similar to physical pressure and gas molecules in this sense? I suspect they are, although I cannot say for certain, and I maintain that nobody can say with confidence that they aren’t either.

A computer simulation of a brain will never be an actual physical brain, and how can we assert that this is not required in order to bring about conscious states like our own? How can we say that reproducing our conscious experience does not require a concrete microfunctionalist copy of our brains, as opposed to an abstract macrofunctionalist one, such as a movie or a computer simulation of brain activity?[3]

Why Our View of Consciousness Matters

On the realist view of consciousness, there is, as mentioned, a research project ahead of us: identifying the physical basis of conscious states. On the anti-realist view, we can believe whatever we want about consciousness without the risk of being wrong; and similarly do whatever we want without the risk of causing any real suffering, since there really is no such thing. Nobody, on the eliminativist view, is wrong to claim that what you call suffering is actually happiness, or an unconscious state. There may be no ill intent behind this view, yet if taken seriously, it can have atrocious consequences nonetheless.

If we do not get our view of consciousness right – more specifically, if we do not understand the basis of sentience – our efforts to alleviate suffering and create a better world risk being catastrophically misconceived. In order to navigate well in this world, we need an accurate map of reality to navigate by. We need to understand the physical basis of consciousness. And in order to do that, we need to acknowledge its existence in the first place.[4]

* * *

Appendix: Two Quotes and Two Comments

Below are two quotes from Brian Tomasik’s The Eliminativist Approach to Consciousness, accompanied by brief comments of mine that further clarify a couple of points.

“Our perception of being conscious, just like our perception of anything else, is a hypothesis that our brain constructs, based on very complicated processing and lower-level thinking, expressed in terms of a simplified ontology that the brain can make sense of.”

First, I agree that any belief is such a hypothesis – including the belief that the Earth is not flat – yet that does not by itself make it reasonable to doubt any such belief. Second, we agree that the brain “constructs” hypotheses and tests them, yet that is not inconsistent with the direct experiential account: “I consciously observe and conclude”. On any monist view, these are ultimately descriptions of the same thing.

“"Knowing that I'm conscious" is not a thought that somehow transcends ordinary brain machinery, nor does it deserve to be made axiomatic in one's ontology.”

It is not so much that this knowledge is made axiomatic, but rather that it is observed and found to be as undeniable as, say, any principle of logical deduction. Again, in order to deny the existence of consciousness, one must rely on its very existence, which renders the denial of consciousness as incoherent as square circles. At the level of my beliefs, I don’t consider the existence of my own consciousness more certain than the non-existence, indeed meaninglessness, of square circles. So I actually agree with the main point of the passage from which this is quoted, namely that the existence of one’s own consciousness is not more certain than all other claims. I think it is merely as certain as anything else that we cannot meaningfully doubt.

Notes

1. One should be careful with language here, however. As David Pearce notes: “Identity is not a causal relationship. We can't simultaneously claim that a conscious state is identical with a brain state and maintain that this brain state causes (or "generates", or "gives rise to" etc) the conscious state in question.”

2. According to Jakob Grue Simonsen (personal communication), professor of computer science at the University of Copenhagen.

3. This question of course assumes that conscious experiences do indeed exist.

4. Small parts of this essay have previously been published in the book A Copernican Revolution in Ethics.

Additional Resources

Abolitionist.com

Physicalism.com

Cosmic Consciousness for Tough Minds (A review of David Chalmers’ ‘The Conscious Mind’)

Organic VR (Terminological note for philosophers)

Should We Eliminate The Human Ability to Feel Pain?

Magnus Vinding’s Disagreements with Brian Tomasik