At some point we’ll be able to run a computer simulation that contains self-aware entities. In this piece I’m not going to worry about little details such as how to tell if a simulated entity is self-aware or whether it’s even possible to run such a simulation. The goal, rather, is to look into some philosophical problems posed by simulations.

A computer simulation takes a model in some initial configuration and evolves the state of the model through repeated programmatic application of rules. Usually we run a simulation in order to better understand the dynamics of some process that is hard to study analytically or experimentally. The straightforward way to implement a simulator is to represent the system state in some explicit fashion, and to explicitly run the rule set on every element of the state at every time step. It seems clear that if our simulation includes a self-aware entity, this sort of execution is sufficient to breathe life into the entity. But what about other implementation options?

First, our simulator might be mechanically optimized by a compiler or similar tool that would combine or elide rule applications in certain situations. For example, if the simulation state contains a large area of empty cells, the optimizer might be able to avoid running the rule set at all in that part of the space. Can the entity being simulated “feel” or otherwise notice the compiler optimizations? No — as long as the compiler is correct, its transformations are semantics-preserving: they do not affect the computation being performed. Of course a suitable definition of “do not affect” has to be formulated; typically it involves defining a set of externally visible program behaviors such as interactions with the operating system and I/O devices.
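To make the empty-region optimization concrete, here is a sketch using an elementary one-dimensional cellular automaton (Rule 110, chosen only because it maps an all-dead neighborhood to a dead cell — the same property that makes empty regions skippable). The function names are mine, purely illustrative: the point is that the optimized step never evaluates the rule in fully dead neighborhoods yet produces bit-for-bit the same states as the naive step.

```python
RULE = 110  # any rule with 000 -> 0 (a quiescent dead state) would do

def naive_step(cells):
    """Apply the rule to every cell (cells beyond the edges are dead)."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        l = cells[i - 1] if i > 0 else 0
        r = cells[i + 1] if i < n - 1 else 0
        # Look up the new state in the rule's 8-entry truth table.
        out[i] = (RULE >> (l << 2 | cells[i] << 1 | r)) & 1
    return out

def optimized_step(cells):
    """Skip cells whose entire neighborhood is dead: since the rule maps
    000 -> 0, their next state is dead without evaluating the rule."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        if cells[i] or (i > 0 and cells[i - 1]) or (i < n - 1 and cells[i + 1]):
            l = cells[i - 1] if i > 0 else 0
            r = cells[i + 1] if i < n - 1 else 0
            out[i] = (RULE >> (l << 2 | cells[i] << 1 | r)) & 1
    return out
```

The two functions agree on every input, which is exactly the semantics-preservation property: an entity encoded in the cells could not tell which step function its universe is running on.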

Compiler optimizations, however, are not the end of the story — algorithmic optimizations can be much more aggressive. I’ll use Conway’s Game of Life as the example. First a bit of background: Life is a cellular automaton in which each cell of a rectangular grid is either alive or dead, governed by these rules (here I’m quoting from Wikipedia):

Any live cell with fewer than two live neighbors dies, as if caused by under-population.

Any live cell with two or three live neighbors lives on to the next generation.

Any live cell with more than three live neighbors dies, as if by overcrowding.

Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.

Life has been shown to be Turing complete, so clearly self-aware entities can be encoded in a Life configuration if they can be algorithmically simulated at all. A straightforward implementation of Life maintains two copies of the Life grid, using one bit per cell; at every step the rules are applied to every cell of the old grid, with the results being placed into the new grid. At this point the old and new grids are swapped and the simulation continues.
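As a concrete sketch, the straightforward update described above might look like the following (one generation; cells outside the grid are treated as permanently dead — in the two-grid scheme this function computes the new grid from the old one):

```python
def life_step(grid):
    """Apply the Life rules to every cell, returning a new grid."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbors among the (up to) eight adjacent cells.
            n = sum(grid[rr][cc]
                    for rr in range(max(r - 1, 0), min(r + 2, rows))
                    for cc in range(max(c - 1, 0), min(c + 2, cols))
                    if (rr, cc) != (r, c))
            # Birth on exactly three neighbors; survival on two or three.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new
```

A blinker — three live cells in a row — flips between horizontal and vertical under this function, returning to its original configuration every two steps.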

Hashlife is a clever optimization that treats the Life grid as a hierarchy of quadtrees. By observing that the maximum speed of signal propagation in a Life configuration is one cell per step, it becomes possible to evolve the center of a square region many steps into the future using only that region’s contents; Hashlife memoizes these results in a hash table so that repeated patterns are evolved only once. Hashlife is amazing to watch: it starts out slow, but as the hashtable fills up it suddenly “explodes” into exponential progress. To see it in action, I recommend Golly. Hashlife is one of my ten all-time favorite algorithms.
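Here is a compressed sketch of the recursion (my own minimal rendering, not Golly’s production implementation). Nodes are plain tuples, so structurally identical regions compare equal and `lru_cache` plays the role of the hash table, computing the evolution of each repeated pattern only once; `result` advances the center of a 2^k-square node 2^(k-2) generations in one call.

```python
from functools import lru_cache

def build(grid):
    """Pack a 2^k x 2^k list-of-lists of 0/1 cells into a quadtree node:
    a nested 4-tuple (nw, ne, sw, se) whose leaves are the cells."""
    if len(grid) == 1:
        return grid[0][0]
    h = len(grid) // 2
    top, bot = grid[:h], grid[h:]
    return (build([r[:h] for r in top]), build([r[h:] for r in top]),
            build([r[:h] for r in bot]), build([r[h:] for r in bot]))

# Helpers that assemble the overlapping half-size subsquares of a node.
def _h(w, e):            # horizontally adjacent pair -> centered node
    return (w[1], e[0], w[3], e[2])

def _v(n, s):            # vertically adjacent pair -> centered node
    return (n[2], n[3], s[0], s[1])

def _c(nw, ne, sw, se):  # four quadrants -> centered node
    return (nw[3], ne[2], sw[1], se[0])

@lru_cache(maxsize=None)
def result(node):
    """Center of a 2^k-square node advanced 2^(k-2) generations (k >= 2)."""
    nw, ne, sw, se = node
    if isinstance(nw[0], int):
        # Base case, a 4x4 block: advance its 2x2 center one generation.
        g = [[nw[0], nw[1], ne[0], ne[1]],
             [nw[2], nw[3], ne[2], ne[3]],
             [sw[0], sw[1], se[0], se[1]],
             [sw[2], sw[3], se[2], se[3]]]
        def step(r, c):
            n = sum(g[i][j] for i in (r-1, r, r+1) for j in (c-1, c, c+1)) - g[r][c]
            return 1 if n == 3 or (g[r][c] and n == 2) else 0
        return (step(1, 1), step(1, 2), step(2, 1), step(2, 2))
    # Nine overlapping half-size results (each 2^(k-3) steps ahead),
    # then four more recursive calls for another 2^(k-3) steps.
    n00, n01, n02 = result(nw), result(_h(nw, ne)), result(ne)
    n10, n11, n12 = result(_v(nw, sw)), result(_c(nw, ne, sw, se)), result(_v(ne, se))
    n20, n21, n22 = result(sw), result(_h(sw, se)), result(se)
    return (result((n00, n01, n10, n11)), result((n01, n02, n11, n12)),
            result((n10, n11, n20, n21)), result((n11, n12, n21, n22)))
```

Note the source of the “time warp” within the grid: a single call to `result` on a level-k node jumps its center 2^(k-2) generations ahead without materializing the intermediate generations, and cached regions jump instantly.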

The weird thing about Hashlife is that time progresses at different rates at different parts of the grid. In fact, two cells that are separated by distance n can be up to n time steps apart. Another interesting thing is that chunks of the grid may evolve forward by many steps without the intermediate steps being computed explicitly. Again we ask: can self-aware Life entities “feel” the optimization? It would seem that they cannot: since the rules of their universe are not being changed, their subjective experience cannot change. (The Hashlife example is from Hans Moravec’s Mind Children.)

If Hashlife is a viable technology for simulating self-aware entities, can we extend its logic to simply replace the initial simulation state with the final state, in cases where we can predict the final state? This would be possible for simulated universes that provably wind down to a known steady state, for example due to some analogue of the second law of thermodynamics. The difference between Hashlife and “single simulation step to heat death” is only a matter of degree. Does this fact invalidate Hashlife as a suitable algorithm for simulating self-aware creatures, or does it imply that we don’t actually need to run simulations? Perhaps to make a simulation “real” it is only necessary to set up its initial conditions. (Greg Egan’s Permutation City is about this idea.)

Aside: Why don’t we have processor simulators based on Hashlife-like ideas? Although ISA-level simulators do not have the kind of maximum speed of signal propagation seen in cellular automata, the programs they run do have strong spatial and temporal locality, and I bet it’s exploitable in some cases. I’ve spent long hours waiting for simulations of large collections of simple processors, daydreaming about all of the stupid redundancy that could be eliminated by a memoized algorithm.

Techniques like Hashlife only go so far; to get more speedup we’ll need parallelism. In this scenario, the simulation grid is partitioned across processors, with inter-processor communication being required at the boundaries. An unfortunate thing about a straightforward implementation of this kind of simulation is that processors execute basically in lock step: at least at the fringes of the grid, no processor can be very far ahead of its neighbors. The synchronization required to keep processors in lock step typically causes slowdown. Another ingenious simulation speedup (developed, as it happens, around the same time as Hashlife) is Time Warp, which relaxes the synchronization requirements, permitting a processor to run well ahead of its neighbors. This opens up the possibility that a processor will at some point receive a message that violates causality: it needs to be executed in the past. Clearly this is a problem. The solution is to roll back the simulation state to the time of the message and resume execution from there. If rollbacks are infrequent, overall performance may increase due to improved asynchrony. This is a form of optimistic concurrency and it can be shown to preserve the meaning of a simulation in the sense that the Time Warp implementation must always return the same final answer as the non-optimistic implementation.
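Here is a toy sketch of the rollback mechanism for a single logical process (the names and the running-sum state are illustrative, not taken from Jefferson’s actual Time Warp system, and real Time Warp also cancels already-sent output with anti-messages, which I omit). Events execute optimistically, checkpoints are saved after each one, and a straggler message forces the process to restore an earlier checkpoint and re-execute:

```python
import heapq

class LogicalProcess:
    """One Time Warp logical process; its toy state is a running sum.
    Timestamps are assumed positive and distinct."""

    def __init__(self):
        self.state = 0
        self.lvt = 0                 # local virtual time
        self.pending = []            # heap of (timestamp, value) messages
        self.processed = []          # executed events, kept for replay
        self.snapshots = [(0, 0)]    # (virtual time, state) checkpoints

    def receive(self, ts, value):
        if ts < self.lvt:            # straggler: a message from our past
            self._rollback(ts)
        heapq.heappush(self.pending, (ts, value))

    def _rollback(self, ts):
        # Restore the newest checkpoint strictly before the straggler...
        while self.snapshots[-1][0] >= ts:
            self.snapshots.pop()
        self.lvt, self.state = self.snapshots[-1]
        # ...and requeue the undone events so they re-execute.
        undone = [ev for ev in self.processed if ev[0] > self.lvt]
        self.processed = [ev for ev in self.processed if ev[0] <= self.lvt]
        for ev in undone:
            heapq.heappush(self.pending, ev)

    def run(self):
        # Optimistically execute everything we currently know about.
        while self.pending:
            ts, value = heapq.heappop(self.pending)
            self.state += value
            self.lvt = ts
            self.processed.append((ts, value))
            self.snapshots.append((ts, self.state))
```

After a rollback and re-execution the process reaches the same state a conservative in-order execution would have produced — but the undone events really were executed once, which is exactly the situation the questions below worry about.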

Time Warp places no inherent limit on the amount of speculative execution that may occur — it is possible that years of simulation time would need to be rolled back by the arrival of some long-delayed message. Now we have the possibility that a painful injury done to a simulated entity will happen two or more times due to poorly-timed rollbacks. Even weirder, an entity might be dead for some time before being brought back to life by a rollback. Depending on the content of the message from the past, it might not even die during re-execution. Do we care about this? Is speculative execution immoral? If we suppress time warping due to moral considerations, must we also turn off processor-level speculative execution? What if all of human history is a speculative mistake that will be wiped out when the giant processor in the sky rolls back to some prehistoric time in order to take an Earth-sterilizing gamma ray burst into account? Or perhaps, on the other hand, these speculative executions don’t lead to “real experiences” for the simulated entities. But why not?

The Hashlife and Time Warp ideas, pushed just a little, lead to very odd implications for the subjective experience of simulated beings. In question form:

First, what kind of simulation is required to generate self-awareness? Does a straightforward simulator with an explicit grid and explicit rule applications do it? Does Hashlife? Is self-awareness generated by taking the entire simulation from initial state to heat death in a single step? Second, what is the effect of speculative execution on self-aware entities? Do speculative birth and death, pain and love count as true experiences, or are they somehow invalid?

These questions are difficult — or silly — enough that I have to conclude that there’s something fundamentally fishy about the entire idea of simulating self-aware entities. (Update: Someone told me about the Sorites paradox, which captures the problem here nicely.) Let’s look at a few possibilities for what might have gone wrong:

The concept of self-awareness is ill-defined or nonsensical at a technical level.

It is not possible to simulate self-aware entities because they rely on quantum effects that are beyond our abilities to simulate.

It is not possible to simulate self-aware entities because self-awareness comes from souls that are handed out by God.

We lack programming languages and compilers that consider self-awareness to be a legitimate side effect that must not be optimized away.

With respect to a simulation large enough to contain self-aware entities, effects due to state hashing and speculation are microscopic — much like quantum effects are to us — and are necessarily insignificant at the macroscopic level.

All mathematically describable systems already exist in a physical sense (see Max Tegmark’s The Mathematical Universe — the source of the title of this piece) and therefore the fire, as it were, has already been breathed into all possible world-describing equations. Thus, while simulations give us windows into these other worlds, they have no bearing on the subjective experiences of the entities in those worlds.

The last possibility is perhaps a bit radical but it’s the one that I prefer: first because I don’t buy any of the others, and second because it avoids the problem of figuring out why the equations governing our own universe are special — by declaring that they are not. It also makes simulation arguments moot. One interesting feature of the mathematical universe is that even the very strange universes, such as those corresponding to simulations where we inject burning bushes and whatnot, have a true physical existence. However, the physical laws governing these would seem to be seriously baroque and therefore (assuming that the multiverse discourages complexity in some appropriate fashion) these universes are fundamentally less common than those with more tractable laws.