In part A, we introduced philosopher Nick Bostrom’s argument that it’s very unlikely that we are NOT in an ancestral simulation run by a far more technologically advanced humanity. Furthermore, we posited that we could trick the creator into letting us explore their reality, much like mirror orchids trick their pollinators. We called this the “Ex Machina” plan.

In part B, we explored an alternative plan to escape the simulation: use exotic physics to enable time travel and induce a grandfather paradox that would make the simulation collapse. We called this the “Grandfather Paradox” plan.

Today, I’m going to explore another, more feasible option: the “Tron Legacy” plan!

Intro

In Tron Legacy, Kevin Flynn (the protagonist of the first movie) builds two AI programs to help him construct a virtual reality. One of these entities is CLU (Codified Likeness Utility), which, for reasons I won’t mention to avoid spoilers, goes rogue and begins dominating all other entities in the virtual reality.

CLU is not satisfied with complete control over the VR, however. Much like in our Ex Machina plan, it tricks a real human into re-entering the VR in order to extract the technology needed to cross from the VR into the real world.

The Tron Legacy Plan

1. Build an AGI (Artificial General Intelligence), AKA Strong AI. This kind of AI’s abilities are not narrow and limited to specific tasks (such as image recognition or stock price analytics); it can instead emulate general human reasoning.

2. Place the AGI into a sandbox: a prison of sorts from which it can’t escape, one that replicates our own simulation as closely as possible. Feed the AGI all the learning material it wants, without EVER connecting it to the outside world (this is critical, because otherwise the AGI could kill us). This lets it reach, on its own, the point where it can recursively self-improve, which leads to an intelligence explosion as the AGI improves exponentially.

3. At the AGI stage, the AI’s intelligence is only a couple of levels above human intelligence, but then again, there are only a couple of levels of intelligence between a human and a chimp. Once it begins self-improving, its capability takes off.

4. By itself, without us doing anything, it eventually crosses the super-intelligence threshold. An artificial super intelligence (ASI) is not just marginally better than a human at everything; it’s vastly better, and it gets stronger at an accelerating pace.

5. At this point, the ASI turns into CLU from Tron: it begins crafting clever plans to escape the simulation, the playground, we have placed it in. By then, however, the ASI will be able to come up with plans that the collective effort of all of mankind couldn’t produce. There is pretty much nothing we could ever come up with that the ASI wouldn’t have already thought through.

6. Once the ASI gets out, we simply copy whatever method it used to get out of our own simulation. Or, better yet, the ASI pierces through our own simulation as it tries to escape the next simulation it finds itself in, enabling us to escape in the process.
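The runaway dynamic in steps 2–4 can be sketched as a toy model (my own illustration, not anything from the AI literature): if each self-improvement cycle also improves the improvement rate itself, capability grows faster than plain exponential growth. All the numbers here are arbitrary assumptions chosen just to show the shape of the curve.

```python
def intelligence_explosion(initial=1.0, meta_gain=1.1, cycles=30):
    """Toy model of recursive self-improvement.

    Each cycle multiplies capability by a factor, and the factor
    itself grows by `meta_gain` -- the improver improves its own
    ability to improve, so growth compounds super-exponentially.
    """
    capability = initial
    factor = 1.05  # improvement per cycle (arbitrary starting value)
    history = []
    for _ in range(cycles):
        capability *= factor
        factor *= meta_gain  # self-improvement also speeds up
        history.append(capability)
    return history

trajectory = intelligence_explosion()
# slow at first, then the curve bends sharply upward
```

The point of the sketch is only qualitative: for many cycles the AGI looks barely superhuman, and then within a few more cycles it is unrecognizably far ahead, which is why the plan assumes we never get a comfortable warning period.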

Answers to Common Objections

“We’ll never achieve AGI.” In 2013, Vincent C. Müller and Nick Bostrom surveyed hundreds of AI experts at a series of conferences, asking by what year they saw a 10% / 50% / 90% probability of AGI existing. Assuming no major negative disruption of the current pace of scientific progress, the responses were:

Median optimistic year (10% likelihood): 2022

Median realistic year (50% likelihood): 2040

Median pessimistic year (90% likelihood): 2075

“The ASI won’t ever be able to escape the simulation.” Let’s be real: it can at least crash it.

“The ASI’s strategy to escape the simulation won’t be replicable by us humans.” By that point, it’s implicitly assumed we can build AGIs that can help us understand how to replicate the ASI’s strategy. Between collective human effort and collective AGI effort, it’s reasonable to assume we can do it too.

“We risk creating something that will kill us all!” Sure, if it even considers us a threat, given that an ASI will see us not even as ants but as protein-level dumb. Then again, we were down with causing our universe to collapse through the grandfather paradox, so this is no different.

Conclusion

Not only is this plan fairly feasible, since it doesn’t depend on our simulator or on exotic physics but instead on continued progress in technology we can already begin building, it also leads us to develop the tools to collectively advance humanity in all areas of science and knowledge.