Among the major models in physics and cosmology, the simulation hypothesis, digital mechanics, and computation-based interpretations have endured and indeed grown in popularity over the last few decades, making them major contenders in the scientific dialogue. Simulation and computation make for such powerful analogies because they offer a useful framework for approaching the paradoxes found throughout quantum mechanics. Quantum phenomena frequently behave as if spacetime were moot, and that would make sense if our universe is something like a probability-based simulation, a computer, a virtual reality, or a video game. VR would account for the time-tested puzzles of entanglement, erasure, delayed choice, wave-particle duality, retrocausality, and all the other strange observations in quantum studies that Einstein famously dismissed as far too “spooky” for him to swallow. Nevertheless, quantum mechanics remains the most mathematically precise framework we have for describing how natural phenomena behave.

Some have argued that this trendy computational-simulation angle is untestable, non-falsifiable, and thus, ultimately irrelevant. A useless novelty of our computer-plagued era. Well no more…

Former NASA nuclear physicist Thomas Campbell and others have published a paper in the International Journal of Quantum Foundations called “On testing the simulation hypothesis” (http://www.ijqf.org/wps/wp-content/uploads/2017/03/IJQF-3888.pdf). It presents “conceptual wave/particle duality experiments aimed at testing the simulation hypothesis.”

They write:

Can the hypothesis that reality is a simulation be tested? We investigate this question based on the assumption that if the system performing the simulation is finite (i.e. has limited resources), then to achieve low computational complexity, such a system would, as in a video game, render content (reality) only at the moment that information becomes available for observation by a player and not at the moment of detection by a machine (that would be part of the simulation and whose detection would also be part of the internal computation performed by the Virtual Reality server before rendering content to the player).

Proposing doable, realistic experiments for testing the boundaries of our would-be virtual-reality universe is itself an achievement. The ease with which such experiments could be carried out is what makes Campbell and company’s paper so exciting: none of the experiments Campbell offers are particularly difficult or expensive to implement. They specify, “More precisely our hypothesis is that wave or particle duality patterns are not determined at the moment of detection but by the existence and availability of the which way data when the pattern is observed.” The paper offers a set of six entirely new quantum experiments designed to test this perspective.

The focus of this new paper is the longtime debate over whether conscious observation plays a fundamental role (possibly an exclusive one) in collapsing the wave function, or whether environments and apparatuses collapse the wave function as well:

What causes and determines the collapse of the wave function? Or in Virtual Reality (VR) terminology, what causes the virtual reality engine to compute and make information defining the VR available to an experimenter within the VR?

Is it

(I) entirely determined by the experimental/detection set-up?

(II) or does the observer play a critical role in the outcome?

Under the simulation hypothesis, these questions can be analyzed based on the idea that a good/effective VR would operate based on two, possibly conflicting, requirements: (1) preserving the consistency of the VR (2) avoiding detection (from the players that they are in a VR). However, the resolution of such a conflict would be limited by computational resources, bounds on computational complexity, the granularity of the VR being rendered and logical constraints on how inconsistencies can be resolved. Occasionally, conflicts that were unresolvable would lead to VR indicators and discontinuities (such as the wave/particle duality).

In 2015, the nearly century-long debate over whether conscious measurement collapses the wave function (https://plato.stanford.edu/entries/qm-collapse/#ProbTailWaveFunc) became positively smoldering with the execution of a delayed-choice thought experiment originally proposed by John Archibald Wheeler, carried out at The Australian National University (http://www.nature.com/nphys/journal/v11/n7/full/nphys3343.html). Associate Professor Andrew Truscott from the ANU Research School of Physics and Engineering said flatly, “It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it” (https://www.sciencedaily.com/releases/2015/05/150527103110.htm). Things do not really exist as we tend to imagine them; they only appear to exist, and only in the presence of an observer. Similar to the experiment performed at ANU, Campbell and company’s “first test is based on a modification of the delayed choice quantum eraser.”

If Campbell and company’s experiments prove that “the observer play[s] a critical role in the outcome,” as the ANU work has, then several important readings fall out from there. One is that Wheeler’s intuition was correct: the language of “observer-participants” is in fact a spot-on reading of the role of conscious measurement in relation to a quantum system. Wheeler, a heavyweight of 20th-century physics, brought the world everything from black holes to quantum foam to an information-theoretic reading of physics, widely known by his slogan It from Bit (http://cqi.inf.usi.ch/qic/wheeler.pdf).

Conceptually speaking, an observer-participant is only a stone’s throw from the mechanics of a player-avatar in a video game.

Although deeply counter-intuitive to the assumptions of determinism, materialism, and realism that we are taught in high-school science classes, the best way to comprehend how it could be that the universe only exists when you look at it is to consider how players and avatars interact within video-game maps. The avatar only seems to move through the level; it is an illusion. The game’s spacetime effects are ultimately just displays provided to the player-avatar by a computer. When you play a video game, every level and detail is loaded and rendered only as needed. That is how you can have phenomena that do indeed appear to occur within a spacetime universe and, paradoxically, also have that spacetime behave as if it were both real and moot at different scales. Campbell and company are banking on a similar outcome in their experiments. They write:

It is also now well understood, in the domain of game development, that low computational complexity requires rendering/displaying content only when observed by a player. Recent games, such as No-Man’s Sky and Boundless, have shown that vast open universes (potentially including “over 18 quintillion planets with their own sets of flora and fauna”) by creating content, only at the moment the corresponding information becomes available for observation by a player, through randomized generation techniques (such as procedural generation). Therefore to minimize computational complexity in the simulation hypothesis, the system performing the simulation would render reality only at the moment the corresponding information becomes available for observation by a conscious observer (a player), and the resolution/granularity of the rendering would be adjusted to the level of perception of the observer. More precisely, using such techniques, the complexity of simulation would not be constrained by the apparent size of the universe or an underlying pre-determined mesh/grid size but by the number of players and the resolution of the information made available for observation.
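The rendering strategy the authors describe, deterministic content generated only at the moment of observation, can be sketched in a few lines of Python. This is a minimal illustration of seeded procedural generation, not code from No Man’s Sky or any real engine; the planet fields and terrain types here are invented for the example.

```python
import hashlib

def observe(seed: str, coord: tuple) -> dict:
    """'Render' a planet only at the moment a player observes it.

    Nothing is stored for unobserved coordinates. Because the content is a
    pure function of (seed, coordinate), re-observation is always consistent,
    so the world appears vast and persistent at near-zero storage cost.
    """
    # Hash the world seed together with the coordinate for reproducible bytes.
    digest = hashlib.sha256(f"{seed}:{coord}".encode()).digest()
    return {
        "terrain": ("ocean", "desert", "forest", "ice")[digest[0] % 4],
        "fauna_species": digest[1] % 10,  # invented per-planet statistic
    }

# The 'universe' is just a seed string until somebody looks.
first_look = observe("universe-42", (1000, -7))
second_look = observe("universe-42", (1000, -7))
assert first_look == second_look  # consistency without stored state
```

The same trick scales to “over 18 quintillion planets” because storage grows with the number of observations, not with the size of the coordinate space.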

Nick Bostrom articulated the idea elegantly in his now famous paper Are you Living in a Computer Simulation? (http://www.simulation-argument.com/simulation.html):

Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify.

And, in a truly sci-fi twist, Bostrom adds:

Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.

The universe is assumed to be massive in size and to contain a massive amount of information. How is it that the universe could be so vast, or appear to be so vast? What could possibly support or cause such complexity, diversity, and life? Well, in some ways it has already been done in video games. If this universe is a computer simulation, then only what an observer-participant requests needs to exist at any given moment. This would save computing cycles for whatever computational meta-system supports our simulated universe. It also makes sense of why all the matter, energy, and the many laws and constants of physics (some of them tuned to mind-bogglingly fine precision) would appear all at once, from apparently nowhere, for apparently no reason: one day, the simulation was simply switched on.
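The computing-cycle savings described above amount to lazy, observation-triggered evaluation plus caching for consistency. A toy sketch, with all names invented for illustration:

```python
class LazyUniverse:
    """Toy model of 'render on observation': a cell exists only after a look."""

    def __init__(self):
        self._rendered = {}  # only observed cells ever occupy memory

    def look_at(self, x: int, y: int) -> str:
        # Render on first observation; afterwards the cell stays consistent,
        # so an observer can never catch the universe changing behind them.
        if (x, y) not in self._rendered:
            self._rendered[(x, y)] = f"cell({x},{y})"
        return self._rendered[(x, y)]

universe = LazyUniverse()
universe.look_at(3, 4)
# However vast the coordinate space, only the observed cell consumes resources.
assert len(universe._rendered) == 1
```

The design point is that an unbounded coordinate space costs nothing up front; resources are spent strictly in proportion to what has actually been observed.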

At the end of his life Einstein concluded, “Space does not have an independent existence.” That means the stuff of the world isn’t as objective and independent as it looks. We are not like little cameras looking at a solid world out there at all. In fact, the world only appears objective and solid after you’ve looked. Maybe its very existence is dependent upon being looked at. When we don’t look, it exists only as a probabilistic, statistical potential, an unrendered possibility. So without conscious players there would be no game, and without the game there would be no players. They are one and the same, wholly dependent upon each other to exist. Hence Einstein’s tricky comment above.

Einstein also said, “It is clear that the space of physics is not in the last analysis anything given in nature or independent of human thought. It is a function of our conceptual scheme.” Campbell summarized this sentiment from Einstein, saying, “Space is a function of mind.”

David Bohm, another giant of science and a colleague of Einstein’s, said:

“To meet the challenge before us, our notions of cosmology and the general nature of reality must have room in them to permit a consistent account of consciousness. Vice versa, our notions of consciousness must have room in them to understand what it means for its content to be ‘reality as a whole.’ The two sets of notions together should then be such as to allow for an understanding as to how consciousness and reality are related.”

These odd, almost woo-sounding statements about consciousness and observation are far from unusual in the essays and letters of some of the most influential minds behind cosmology and quantum mechanics. Nobel laureate Eugene Wigner said, “It will remain remarkable in whatever way our future concepts may develop that the very study of the external world led to the scientific conclusion that the content of the consciousness is the ultimate universal reality.”

Similarly, Andrei Linde, who predicted the discovery of gravitational waves and is one of the heavyweights behind inflationary theory, stated in a 2002 Discover Magazine piece (http://discovermagazine.com/2002/jun/featuniverse):

"The universe and the observer exist as a pair […] You can say that the universe is there only when there is an observer who can say, Yes, I see the universe there. These small words — it looks like it was here— for practical purposes it may not matter much, but for me as a human being, I do not know any sense in which I could claim that the universe is here in the absence of observers. We are together, the universe and us. The moment you say that the universe exists without any observers, I cannot make any sense out of that. I cannot imagine a consistent theory of everything that ignores consciousness. A recording device cannot play the role of an observer, because who will read what is written on this recording device? In order for us to see that something happens, and say to one another that something happens, you need to have a universe, you need to have a recording device, and you need to have us. It's not enough for the information to be stored somewhere, completely inaccessible to anybody. It's necessary for somebody to look at it. You need an observer who looks at the universe. In the absence of observers, our universe is dead.”

How to wrap your head around all of this? Philosopher Terence McKenna offered an excellent summary of the conscious-observer wave-collapse idea: “Mind is necessary for the universe to undergo the formality of existing.” If observers do indeed render reality, then much of the Western paradigm in physics, philosophy, and metaphysics is shattered. That is why many scientists of Einstein’s era, faced with the same strange experimental results that we are still wrestling with today, made comments along the lines of those above. Max Planck, the godfather of quantum mechanics, said, “We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.” And, “[I]n the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.”

But what if the Mind Planck refers to is not a perfect deity per se, but rather an imperfect yet evolving conscious computer, one that’s crunching out VR universes, physics, and conscious players?

If Campbell turns out to be correct and we are in a computer simulation, then there is indeed an answer to Einstein’s famous question, “Do you really believe the moon isn’t there when you aren’t looking?”

The answer is emphatically, “There is no moon.”

- International Journal of Quantum Foundations, “On testing the simulation hypothesis” (http://www.ijqf.org/wps/wp-content/uploads/2017/03/IJQF-3888.pdf)

- “Breaking into the Simulated Universe,” on simulation paradoxes and “there is no moon” (https://ieet.org/index.php/IEET2/more/Edge20161030)