So, Elon Musk, the Silicon Valley entrepreneur and CEO of Tesla Motors, thinks we’re all living in a computer simulation.

Here’s a devastating counterargument to Musk. This can’t all be a video game, since in a video game no one would care about the Model 3.

Ask yourself: When’s the last time you commandeered a hybrid in Grand Theft Auto?

But let’s treat Musk’s beliefs a bit more seriously. Here he is at Recode’s Code Conference 2016 explaining his views:

The strongest argument for us being in a simulation probably is the following. 40 years ago we had pong. Like, two rectangles and a dot. That was what games were. Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it’s getting better every year. Soon we’ll have virtual reality, augmented reality. If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let’s imagine it’s 10,000 years in the future, which is nothing on the evolutionary scale. So given that we’re clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we’re in base reality is one in billions. Tell me what’s wrong with that argument. Is there a flaw in that argument?

Well, let’s see.

In 2003, Oxford philosopher Nick Bostrom published the provocatively titled “Are You Living in a Computer Simulation?” in Philosophical Quarterly, one of the field’s most respected academic journals.

Bostrom’s thesis, while less philosophically assertive than Musk’s argument, is stunning. Either:

(a) few, if any, civilizations will reach the point of being able to run simulations indistinguishable from reality as we know it, or

(b) no civilization that reaches such a point will have an interest in running simulations of this kind, or

(c) we are nearly certainly living in a computer simulation.

Bostrom thinks these possibilities are equiprobable; what’s different about Musk is that he’s fairly certain (c) is true.

Bostrom thinks that, barring some worldwide catastrophe, it’s safe to assume human beings will achieve the technological prowess needed to run simulations indistinguishable from “base reality” (i.e., non-simulated reality). Even if we reach that point, however, it could be that societies agree to disallow any consciousness-replicating simulations from being run. Bostrom thinks we have no reason to believe that any of these three possibilities (worldwide catastrophe, worldwide refusal, simulation success) is likelier than the others. Musk thinks the last option is so likely to be true that the probability that this world, our world, is not a simulation is only one in one billion.
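Bostrom’s paper makes this probabilistic structure explicit. In simplified notation (this is a rough rendering of his formula, not a quotation of it): let $f_p$ be the fraction of civilizations that survive to a simulation-capable stage and choose to run ancestor simulations, and let $\bar{N}$ be the average number of such simulations each one runs. Then the fraction of observers with human-type experiences who live inside simulations is

```latex
f_{\text{sim}} = \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

If $f_p \bar{N}$ is enormous, as in Musk’s picture of billions of set-top boxes each running simulations, $f_{\text{sim}}$ approaches 1. If either factor is near zero, corresponding to options (a) and (b), it approaches 0. The trilemma is just a statement of which of these regimes we’re in.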

Bostrom’s argument is one of the more recent iterations of the reality-challenging thought experiments that have enjoyed a storied history in philosophy. These thought experiments (think of them as argument-advancing sci-fi scenarios) are not always deployed toward the same goal. Some are used to explore questions in metaphysics (the Ship of Theseus; Leibniz’s Mill Argument), others are devoted to epistemological problems (the Cartesian Demon; Brains in a Vat), and still others explore ethical questions (Nozick’s Experience Machine; Thomson’s Famous Violinist). Some, such as Rawls’ hypothetical contract scenario, which I wrote about below, explore political considerations.

While Bostrom’s thought experiment shares some similarities with these other arguments, it is almost certainly a mistake to see it as “just the latest version” of the Cartesian Demon argument, as Vox’s David Roberts has recently claimed.

I point this out because delineating the differences between the two will help us see that Musk is making a spectacularly stronger claim.

The Cartesian Demon argument comes to us from the famous French philosopher René Descartes, whose Meditations on First Philosophy, published in 1641, went on to become one of the most significant philosophical works of all time.

Descartes’ goal was to establish knowledge on a firm foundation. The scientific revolution was underway, and Descartes wanted to explore whether human pretensions to knowledge were well-founded or whether there is some defect in our knowledge-acquisition mechanism that should keep us from being sure of things.

Think about it this way: suppose you trust that your significant other is being faithful because a mutual friend, who is in a position to know, assures you she is. If you later find out that your friend is a pathological liar, you might be in big trouble. Or consider the revelation that a DNA laboratory has failed to maintain the standards necessary for accurate results: any conclusions once drawn from that lab’s results would now be called into question. Descartes is trying to ensure that there is no problem of this sort at the source of our knowledge. If there is, it would call everything into question.

So he goes in search of a principle or a process which he cannot doubt. That’s the only way that knowledge can have a firm foundation, he reasons.

He first considers the principle: My senses are perfectly reliable.

Can he doubt this? Well, yes.

Put a stick under water, and your senses tell you it’s bent. Since this principle can be doubted, it cannot play the foundational role Descartes is looking for.

He next considers a modified version of the same principle: In ideal conditions, my senses are perfectly reliable.

The reason for the modification is this: water is a distorting agent. Yet what happens when we’re in a situation in which there is no such interference? How about sitting in your living room, next to the fireplace, holding a piece of paper? There is nothing running interference against your senses there. Surely you are really sitting there and really holding the piece of paper.

Yet even this can be doubted. You could be dreaming. If so, then you’re not really sitting; you’re planked on your bed. And you’re not really holding a piece of paper; you’re clutching the edge of your pillow. (For a movie-long thought experiment about this, see Inception; or see the short-lived TV show Awake.)

But isn’t it the case that whether we’re asleep or awake, certain mathematical and scientific claims remain true? I am thinking of the claim that a triangle has three sides. Or that 1 + 1 = 2.

If so, then the principle Descartes has been looking for is finally here: My reasoning process is reliable.

Notice that the senses have been abandoned as the possible foundation of human knowledge — they’re too unreliable. Descartes has shifted to the mind’s rational capacities independent of senses.

Yet here’s where the evil demon comes in: Descartes says he cannot rule out that there is an evil demon systematically deceiving him. Descartes thinks he is reasoning correctly, but all along a demon could be tricking him.

Finally, a breakthrough. Even if there is a demon that is tricking him, he is still thinking, and this means he exists.

So Descartes arrives at what he takes to be the foundational truth, the certainty which can set up all of human knowledge: cogito, ergo sum (I think, therefore I am).

No matter what the demon tries to do, no matter how deep into an REM cycle Descartes happens to be, no matter how unreliable his senses are, the undeniable truth is this: if he is thinking, then that means he exists.

But notice what Descartes is not claiming. He is not claiming the evil demon exists. He is recognizing that because there is some possibility, however remote, that such a demon exists, he cannot be certain that his reasoning is perfectly reliable. As a Catholic, he doesn’t for a second actually affirm the existence of an all-powerful, malevolent demon who has constructed a cosmic playhouse to sadistically terrorize his creation.

Musk is doing the opposite: he’s claiming the demon exists. Not the demon per se, but a race advanced enough to be able to run the sorts of simulations that contain conscious subjects suffering mass delusions about reality.

You might say: “But life just is a bunch of conscious subjects suffering mass delusions about reality.”

Right, but Musk is really, really, really sure that this is due to an advanced version of us running a computer program. He’s not, like Descartes, simply finding himself unable to rule something out, a stance born of epistemic humility. Rather, he’s fairly certain we’re basically all just Sims characters.

And here’s the problem: there is a universe-sized metaphysical gap between ruling out and ruling in — I can’t rule out that invisible fairies are currently swirling around the room, but I have very little reason to rule it in. I can’t rule out that I am the only thing that exists, but I have very little reason to rule it in. These are reasonable positions to take. Musk, however, rules in that we’re all characters in a video game.

To do this, Musk has to build a lot of assumptions into his argument. For one, he has to believe that alternative metaphysical narratives are exceedingly improbable. While theism isn’t incompatible with Bostrom’s simulation thesis, the latter doesn’t fit all that well within the framework offered by any of the great monotheistic traditions. Under these systems, it seems highly unlikely God would allow his creation to reach a point where their creative powers are functionally equivalent to his own. Musk has to assume that theism of this sort (Christianity, Judaism, Islam) is so unfathomably improbable that there is close to no chance it’s true.

Musk’s proposal reminds me of a famous rebuttal to an argument for God’s existence. In a response to the teleological argument, which infers from the world’s apparent complexity the existence of God, the Scottish philosopher David Hume once suggested that maybe the intelligent Designer is really a committee of designers. This is basically what Musk believes, except for one crucial difference: Hume proposed a divine design committee, whereas Musk has in mind an advanced civilization made up of people like us.

Perhaps it’s no surprise that Musk, a herald of technological excitement, would believe in the nearly limitless human capacity for technomastery.

Yet here’s another huge assumption Musk has to build in: that human beings will arrive at a point where consciousness — full-version consciousness, not a cheap imitation — can be replicated within a simulation.

But consciousness remains a mystery for a reason. Musk not only believes future civilizations will solve what David Chalmers famously called “the hard problem of consciousness,” but that their understanding of consciousness will be so total they’ll be able to effortlessly replicate it without limit.

This strikes me as philosophically naive.

Here is how Chalmers puts it:

The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.

Here are examples of conscious activity that would fall under “the easy problem”:

the ability to discriminate, categorize, and react to environmental stimuli

the integration of information by a cognitive system

the reportability of mental states

the ability of a system to access its own internal states

the focus of attention

the deliberate control of behavior

the difference between wakefulness and sleep

Though we don’t currently have fully fleshed out theories concerning each of the above phenomena, it’s not hard to see how we might arrive at complete explanations at some point in the future. A difficulty of a totally different character — because it may ultimately be intractable — is the hard problem of consciousness: the problem of experience.

Chalmers:

When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought…. Why doesn’t all this information-processing go on “in the dark”, free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery.

So Musk has now made two really big assumptions: (1) alternative, simulation-incompatible or simulation-improbable metaphysical narratives are all exceedingly unlikely and (2) consciousness, a problem that some philosophers think is insoluble, will be understood to the point of being able to infuse simulations with it.

But leave aside for the moment the technical complications associated with generating simulated consciousness and consider the moral question of why our designers would invest us with it.

Conscious experience is that vivid, first-person point of view made up of sensations of pleasure, pain, and much else. The philosopher Ned Block distinguishes Access Consciousness from Phenomenal Consciousness: Access Consciousness has to do with mental states that are available to other mental states in order to guide action, while Phenomenal Consciousness is that vivid, first-person view described above. Here’s the crucial part: Block thinks it’s possible, in theory, for us to function exactly as we do now with only Access Consciousness.

It would be like sidestepping a table, rather than running into it, as I walk across a room, even though everything in my head is “dark” and my mind is “blind.” My brain could register signals from my optic nerves about a table being in the way without there being any movie-like experiencing of the room, or the table, or walking, or anything.

If this is true, then why are we conscious?

If our transhuman overlords are interested in running simulations in order to learn things about humanity, why not just run the simulations with characters imbued with only Access Consciousness? If Block is right and Phenomenal Consciousness is superfluous, then the fact that we have it suggests our designers wanted us to feel, even though they could have learned exactly the same information without imbuing us with it. Thanks for the capacity for pain, guys!

I’m a theist. If I were to adopt Musk’s categories, I would say that God is the designer and he’s run just one “simulation,” even though he’s considered all logically possible simulations in his mind. Within this framework, consciousness makes lots of sense. There is a richness that it furnishes us with, even if, strictly speaking, it’s a metaphysical add-on that we could have done entirely without.

If you want to project the impact of a hurricane, you can build a model that replicates everything necessary for understanding its potential impact. Yet the hurricane in the model wouldn’t need to be really wet. And the houses it devastates wouldn’t need to be real. Here’s the crucial aspect for our purposes: we are stipulating that the model furnishes us with the exact same information we’d receive from running a “real life” version.
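To make the point concrete, here is a deliberately toy “hurricane impact” model. Every name and number in it is a hypothetical illustration, not a real meteorological formula; the point is only that the model hands back damage estimates without any simulated water being wet or any simulated house being real.

```python
def simulate_impact(wind_speed_mph: float, n_houses: int) -> dict:
    """Toy damage estimate from wind speed alone.

    Crude illustrative assumption: the fraction of houses destroyed
    grows with the square of wind speed (relative to a hypothetical
    200 mph reference), capped at 100%.
    """
    damage_fraction = min((wind_speed_mph / 200.0) ** 2, 1.0)
    houses_destroyed = int(n_houses * damage_fraction)
    return {
        "damage_fraction": damage_fraction,
        "houses_destroyed": houses_destroyed,
    }

# The model yields usable projections, yet nothing inside it
# experiences the storm.
print(simulate_impact(wind_speed_mph=150, n_houses=1000))
```

Nothing in the program suffers when `houses_destroyed` goes up; the information comes for free, without any inner life attached to it.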

If we wanted to, we could build a model that includes real people and real houses being demolished. But then what would that say about us?