Reality 2.0 is a way to upgrade your current perception of reality. In theory it’s fairly simple to explain how it works: You scan your current environment, create a fully virtual 3D world from your scanned data, combine this with a room-scale VR experience and sync both, and… you’re done. Easy — right?

Here are a few pictures explaining this process:

1. Take the real world and scan it

Ideally using structure sensors that produce point-cloud data, rather than ordinary cameras.
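
To make this step a bit more concrete, here is a minimal sketch of how depth frames from such a sensor could be back-projected into a point cloud, assuming a simple pinhole camera model. The intrinsics used below are placeholder values, not tied to any specific sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud.

    Assumes a pinhole camera model: fx/fy are focal lengths in pixels,
    cx/cy the principal point. Invalid pixels (depth == 0) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Hypothetical usage with a fake 480x640 depth frame:
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (307200, 3)
```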

2. Create a fully virtual 3D environment

Some fancy object & geometry recognition and mapping is required in this step.
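
As one small building block of that recognition step, here is a plain-NumPy RANSAC plane fit that could be used to pull the large flat surfaces (floor, walls, table tops) out of the scanned point cloud before the remaining clusters are matched against known object models. This is only a sketch; the thresholds and iteration counts are arbitrary example values.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.02, seed=None):
    """Fit a dominant plane to an Nx3 point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n·p + d = 0.
    threshold is the max point-to-plane distance (meters) for inliers.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip it
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```

Running this repeatedly and removing the inliers each time would peel away floor and walls, leaving point clusters that are candidates for object recognition.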

3. Create a room-scale VR experience & sync both worlds

Every move you make is tracked, and both worlds are continuously scanned and kept in sync.
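
In practice, "syncing both worlds" mostly means applying the tracked headset pose to the virtual camera one-to-one every frame, so the virtual room never drifts relative to the physical one. A minimal sketch, assuming the tracker delivers a position and a rotation matrix in the same coordinate frame the room was scanned in (the tracker and camera objects below are placeholders, not a real API):

```python
import numpy as np

def camera_pose_matrix(position, rotation):
    """Build a 4x4 world-from-head transform from tracked pose data.

    position: (3,) translation of the headset in room coordinates.
    rotation: (3, 3) rotation matrix of the headset.
    Applying exactly this transform to the virtual camera each frame
    keeps the virtual and physical room aligned.
    """
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position
    return pose

# Hypothetical per-frame update loop (placeholder tracker/camera objects):
# tracked_pos, tracked_rot = tracker.get_pose()
# virtual_camera.set_transform(camera_pose_matrix(tracked_pos, tracked_rot))
```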

The ideal scenario when you put on your VR goggles is… well… nothing, actually. Ideally, everything should stay exactly the way it was without the goggles. The main difference, however, is that you see a fully virtual world instead of your current physical reality. Once you get there, you can change anything you want in your virtual world while still staying in sync with your real world.

This always seemed pretty obvious to me, though I had quite a bit of trouble explaining it to other people. I realized that words were inadequate, and sat down for 3 days to build a simple prototype/dummy in my living room. Here is a demo (more examples below).

I showed this to some people over the last few days, and there were usually two different reactions at this point:

Why should I wear VR goggles if I see exactly the same thing I would see without them?

Short answer: You can change your reality…

I don't see how you could earn money with that?

Short answer: You can change your reality…

Ok, ok, I can already sense some criticism from current VR and AR users out there. Let's tackle it real quick.

Why not use the VR stuff we already have?

There are some caveats to the current state of VR technology, the main one being that people lose touch with their current reality. At this point I've witnessed more than a few people experience VR for the first time in their lives. The first thing they always do is move. Given a room-scale experience like the HTC Vive, they actually can move, right up until they bump into a table or a wall, because they are not familiar with the safe zone. So then they have to teleport; now try explaining that and coordinating the locomotion. VR will never reach critical mass if people have this much trouble switching their perception.

Ah, and don't forget that you have to wrestle with up to nine buttons per hand, on controllers that you need to move around in your real physical environment…

There is some improvement coming up in the VR area, which I cover below.

Why not use AR? AR is already doing it!

AR is an awesome tool if you want to project a virtual object, for example a Pokémon, onto a real surface like a table. What AR can't do right now, and probably won't be able to do for quite a while, is actually remove objects from my perception (you could in theory, but it's way more complicated than the reality 2.0 approach). There are different AR scenarios worth mentioning though:

Smartphones & Tablets

Obviously you won't achieve the same results I'm looking for while holding a smartphone or a tablet in front of your face, so let's skip this one.

Hololens & alternatives

The Hololens is a fine tool, but considering the delay and low field of view there is no chance to create reality 2.0 (did I mention the 3.5k+ price tag?). Even if you solve the delay and FOV issues, you are still left with the fact that you can't remove or change existing things, only overlay them. Magic Leap? Well, that is fantasy for now and probably won't cut it either.

In my opinion (and after actually building some Hololens applications), AR is one of the most overrated technologies we currently have. Overlaying stuff is interesting, but full control over your virtual world is where the magic happens.

Some possible scenarios

You are probably curious what reality 2.0 will let you do to your environment. There is a short answer to this: Everything you can possibly imagine. Because most people lack imagination though, I’ve included 3 scenarios in my dummy (please keep in mind that this is a dummy we are talking about and these are just 3 examples of many, many more):

Scenario 1: Interior design, advanced arch-viz, product development

Change parts of your environment while keeping the basic geometry the way it is. As you can see, my perception of my couch or my walls changes, but my real couch and walls stay the way they are. This obviously works not only for color but also for materials or the whole object. For what it's worth, I could be sitting on an ancient Roman sofa instead of my white leather couch, as long as the dimensions stay the same.

Imagine choosing the interior of your new car while sitting in a dummy car and switching parts back and forth. Or simply changing the style of your home based on your current mood.
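
On the data side, a hedged sketch of what this could look like: each recognized object keeps its scanned pose and dimensions, and only its material (or, if the footprint matches, the whole mesh) is swapped. The names below are made up for illustration and don't refer to any particular engine.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    """A recognized real-world object mirrored into the virtual scene."""
    name: str
    dimensions: tuple          # real-world bounding box (w, h, d) in meters
    mesh: str = "scanned"      # geometry source: the scan or a replacement model
    material: str = "scanned"  # appearance applied on top of the geometry

    def restyle(self, material=None, mesh=None):
        """Swap appearance (and optionally the mesh) without touching dimensions."""
        if material is not None:
            self.material = material
        if mesh is not None:
            self.mesh = mesh

couch = VirtualObject("living_room_couch", dimensions=(2.2, 0.8, 0.9))
couch.restyle(material="white_leather")               # change only the look
couch.restyle(mesh="roman_sofa", material="marble")   # whole object, same footprint
```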

Scenario 2: Useful fully virtual workspace

Ever tried working in VR or AR? The first is a real pain, because you lose contact with your physical environment, which can be disastrous, for example if your cup of coffee sits right next to sensitive equipment or papers. Not to mention the position of your chair or the table itself. AR? Well, only if you spend 3–4k on a Hololens and don't have a monitor or anything else standing in your way, because YOU CAN'T REMOVE ANYTHING USING AR. I, for my part, prefer to have full control over my environment, and reality 2.0 is a good way to get there. Especially if you have ever had to work with 3D objects or data, which is way better in a fully virtual environment.

Scenario 3: Virtual travel & entertainment

Very simplified cinema example…

Instead of sitting in a virtual reality cinema, how about transporting your living room, or just the couch, into a virtual cinema? There are no limits to this: You could enjoy a movie or an experience while being part of the world (think real 3D environments with people moving all around you, including interaction). You could turn your living room into a cabin in the mountains, your walls into wooden beams, and the outside world into a mountain scene. Or how about a beach hut, where your couch turns into rattan and your environment into a beach scene. And so on and so forth; I hope you get the idea. Even more interesting is the concept of fusing different reality 2.0 environments in real time, like experiencing a different living room somewhere in Australia while being on the opposite side of the world.

How to get there and scale things?

There is quite a lot of work that needs to be done in order to actually make reality 2.0 a real thing (pun intended). This whole concept will only work on a large scale if it happens in real time: you should be able to add any new object to the scene and have it automatically recognized, turned into a virtual 3D object, and added to your virtual environment. But here is the good news:

Object detection algorithms are really good by now and can detect objects in real time beyond human capability, even with a standard camera rather than structure sensors.

Speaking of the devil: structure sensors and inside-out tracking are going to be a default part of the new mixed reality platform Microsoft and its partners are trying to push into the market.

Ever heard of meta objects? They could be an enormous shortcut to making this happen. A meta object is most easily described as a superclass for all objects of the same type; a white leather couch, for example, is essentially an instance of a meta couch. Meta objects are used in some really interesting AI algorithms that can interpolate every individual object of a superclass (see the sketch below).

Add multi-user support and a central database for objects, materials and spaces, and you could save new users a lot of time by reusing pre-existing data. Add layers based on how often the objects you are trying to scan actually change.
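
To make the meta-object idea a bit more tangible, here is a minimal sketch of how such a superclass/instance relationship, a shared object database, and change-frequency layers could be modeled. All class and field names are invented for illustration; they do not refer to any existing system.

```python
from dataclasses import dataclass

@dataclass
class MetaObject:
    """Superclass describing a whole category of objects ('a couch')."""
    category: str
    typical_dimensions: tuple   # rough (w, h, d) for the category, in meters
    base_mesh: str              # generic geometry shared by the category

@dataclass
class ObjectInstance:
    """A concrete object derived from a meta object ('my white leather couch')."""
    meta: MetaObject
    dimensions: tuple
    material: str
    update_layer: str = "static"   # layer by how often this object tends to change

# A central database keyed by category lets new users reuse existing data.
object_db = {
    "couch": MetaObject("couch", typical_dimensions=(2.0, 0.8, 0.9),
                        base_mesh="couch_generic"),
}

my_couch = ObjectInstance(object_db["couch"], dimensions=(2.2, 0.8, 0.9),
                          material="white_leather", update_layer="static")
```

A new user whose couch roughly matches the meta couch could then start from the shared geometry instead of scanning everything from scratch.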

Future outlook

Imagine super-light VR goggles, maybe even lenses at some point, with a resolution close to that of the human eye (around 8K). Now combine that with more computational power and better algorithms, and your ways of changing your physical reality are virtually unlimited (again, pun intended).

This future is much closer than you might think. A few years, give or take.