In 2015, something amazing was revealed to the world. Alex Kipman, a Technical Fellow at Microsoft, took the stage and shared a glimpse into the future: a device we now know as the HoloLens.

We watched in awe as we saw the first device, one of many to come, that would completely change the way we interact with reality. Since the inception of the digital age, we’ve been forced to interact with our content almost entirely through the confines of a screen. Although this interface has been vital to the success of humanity, there was always the expectation that at some point we would evolve beyond it.

That time is here, and we’re fast approaching the transition away from screens and into a reality where digital content co-exists with physical objects in the world around us.

With digital content trapped within the confines of a screen, the possibilities are limited to the experiences we have been having over and over for the past 20 years. The freedom we all crave comes when that digital content can escape the TVs on our walls, the monitors on our desks, and the phones in our pockets. The idea of content living outside of screens has been on our minds for years; we have seen it in Hollywood movies such as Minority Report, Iron Man, and The Matrix. Now that this is becoming part of our reality, a mixed reality, the possibilities are limitless.

In their current state, digital devices isolate us, preventing us from building relationships, context, and ideas about the people, places, and things around us. We believe we are now in a transitional state, one where technology stops being an impediment to building connections with our physical world.

In the relatively near future, technology will advance to the point that it becomes invisible, and helps us foster the relationships, context, and ideas it formerly impeded. Within our current reality, we are either constrained to interacting with technology in fixed places, like our homes or offices, or we can take technology with us, limited by the devices we are willing to carry around. The new reality, the next reality, will come when we use the entire world as our interface to technology. Then we will return to exploring the wonders of both the physical and digital worlds as if they are one and the same.

Ingress, Niantic’s predecessor to Pokémon Go, was one of the first to show us how this new Mixed Reality will motivate us to explore the world again, experience the beauty of nature, and care for the places we live. The future we dream of is not far off, and it will be easy to recognize when we walk outside and see a pack of teenagers run past us down the sidewalk, completely immersed in a tactical operation to fight crime in their themed world. It will look like a couple of children discussing and showing each other the new virtual pets they tamed yesterday at the park down the road, or our newlywed neighbors planning their dream landscape together in the front yard. We’ll know we’re there when cleaning up our streets is no longer a chore, but a gamified mission or quest.

The key detail to enable this new reality is that it must be shared.

Without the ability to share the mixed realities we will be creating, consuming, and immersing ourselves in, all of this new technology does nothing more than keep us as isolated as the screens we currently interact with.

Imagine a world themed like the last movie you saw at the theater (hopefully it wasn’t a Stephen King movie). Wouldn’t it be amazing to be completely immersed in that reality? Now imagine that world was not shared. Imagine your friend being unable to play along because he or she chose a different device manufacturer.

We want to see all the cool holographic objects our friends have placed around their house, or the Niagara Falls-themed wall in their living room when we walk through the door. We don’t want an isolated experience; that’s just lonely. We need to avoid the dystopian world some have pictured, of people wandering around waving their hands in the air, completely immersed in their own worlds. We need this to be a shared world, and if it is, we can have a future that is collaborative instead of one that promotes division.

The Path to Get There

We believe there are three building blocks that must be realized over the next few years to enable this mixed reality future. We need:

• Hardware that will enable us to see, hear, feel, touch, and taste it;

• People that will create it; and

• Software that will enable us to experience and share it.

Hardware is on the way. Right now, we are in the “brick-phone phase” of these devices, but we’ll quickly move into a phase of rapid iteration, where devices become lighter, thinner, and faster with each reveal. We’ll move from a two-pound computer on your head (already incredible) to devices the size of a normal pair of glasses. From glasses, we’ll move to a small band that sits below your gaze and projects light into the retinas, much like we perceive light from our natural environment today. Then we’ll reach the point where social acceptance is no longer a question at all, as the devices become contact lenses and integrate further with human biology.

As hardware becomes more available, developers will become ever more excited about building for this field. We can already see a glimpse of this excitement with ARKit and ARCore, but when hardware reaches the mass market, we’ll see exponential growth in the number of ambitious people inspired to build amazing things.

And we need software; specifically, infrastructure and killer applications. Hardware can only drive adoption so far, so it’s important that we have great content that propels people toward the purchase of a device. It could be one app, or it could be ten, but we need the software that makes hundreds of millions of people want to jump into this new world. Before we can have killer applications, though, we need infrastructure: we need to enable people to create experiences that they can share with others.

A Practical Path

As recipients of the first wave of HoloLens devices released, we were among the first teams to start learning the new inputs these devices provide. We learned fast and understood early on that mobile AR experiences don’t necessarily translate well to an immersive Mixed Reality experience. Beyond our intimate understanding of how these experiences differ, we saw the importance of building a sharable world, and how that would become the most important piece of a collaborative computing experience.

We came up with a strategy, a strategy that will change the world. This strategy ultimately became the Pillars of Practical, which act as our foundation for gathering the data that enables this mirrored world.

• Computers need to understand the world around us.

• Computers need to understand what is in the world around us.

• Computers need to understand what we do and what we do with the things in the world around us.

MAPS. OBJECTS. ACTIONS.

1. MAPS

To enable shared experiences, digital content needs to be tethered to points in 3D space. This requires computers to understand what space we are in and where we are within it: spatial localization. We have found a few methods that allow us to quickly localize a person at the room level and load objects within that room. We’ve set up our infrastructure not only to receive map data, but also to send maps, and the tagged information within them, back to users.
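The flow described above — content tethered to 3D points, users localized to a room, and maps served back from shared infrastructure — can be sketched as follows. This is only an illustrative model; every class and method name here (`Anchor`, `RoomMap`, `MapStore`, and so on) is a hypothetical assumption, not Practical’s actual SDK or data model.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Digital content tethered to a point in 3D space."""
    content_id: str
    position: tuple  # (x, y, z) in the room's coordinate frame, in meters

@dataclass
class RoomMap:
    """A room-level map; once a user localizes here, its anchors load."""
    room_id: str
    anchors: list = field(default_factory=list)

    def place(self, content_id, position):
        self.anchors.append(Anchor(content_id, position))

class MapStore:
    """Shared infrastructure: receives map data and serves it back to users."""
    def __init__(self):
        self.rooms = {}

    def upload(self, room):
        self.rooms[room.room_id] = room

    def localize(self, room_id):
        # Real localization would match sensor data against a point cloud;
        # this sketch simply looks the room up by id.
        return self.rooms.get(room_id)

# One user maps their living room and places content in it.
store = MapStore()
living_room = RoomMap("living-room")
living_room.place("niagara-falls-wall", (0.0, 1.5, -2.0))
store.upload(living_room)

# A second user localizes into the same room and sees the same content.
shared = store.localize("living-room")
print([a.content_id for a in shared.anchors])  # ['niagara-falls-wall']
```

The key design point is that the anchors live in the shared store, not on any one device, which is what makes the experience shared rather than isolated.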

Our innovation in this area addresses a major UX issue we saw with the emergence of this technology: mapping a space is not easy. Today, developers have to build their experience around encouraging the user to walk around the location and look at every surface. We think developers should be able to focus on apps, not maps. Furthermore, and while this may improve as devices do, mapping a location takes a long time, and maps can become cluttered. Users shouldn’t have to spend 10–15 minutes to create a clean map of their location. We’ve solved this, and will soon begin to build a 3D point cloud that mirrors the physical world.

2. OBJECTS

We have map data; great, what’s next? Computers need to understand the things within our environments. We aren’t pursuing image-based recognition just to overlay an advertisement or trigger an experience there, as others are; we need to understand what objects are and what they do. To provide full immersion, the computer-powered worlds of our future must be able to procedurally include objects in experiences, and not only include them, but give them purpose. We also need to prepare an index for when the virtual truly does become physical, so that the world is mirrored in every aspect.
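An object index that records purpose, not just identity, might look like the minimal sketch below. The schema and the affordance vocabulary are assumptions made up for illustration; they are not Practical’s actual index.

```python
# Hypothetical object index: each entry records what an object *is* and
# what it affords, so an experience can procedurally give it purpose.
OBJECT_INDEX = {
    "sofa":   {"affordances": ["sit", "occlude"], "movable": False},
    "bottle": {"affordances": ["grasp", "recycle"], "movable": True},
    "tv":     {"affordances": ["display"], "movable": False},
}

def usable_for(affordance):
    """Find every indexed object an experience could use for a purpose."""
    return [name for name, info in OBJECT_INDEX.items()
            if affordance in info["affordances"]]

# An experience that needs something to recycle queries by purpose,
# not by appearance.
print(usable_for("recycle"))  # ['bottle']
```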

3. ACTIONS

Think about walking down the sidewalk of a nice park and seeing an empty bottle in your path. As a good citizen who cares for the environment, you would pick up that object and toss it in the closest recycling bin. That entire scenario is an action. If a computer can see that action, we can teach it to understand that action; and if it understands that action, it can confirm that action. So let’s make an API for understanding that action.
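The see-understand-confirm pipeline described above can be sketched as a toy API. Everything here — the `Action` class, the event strings, the function names — is a hypothetical illustration of the idea, not a real or proposed interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A high-level action recovered from low-level observations."""
    actor: str
    verb: str
    obj: str
    target: str

def understand(events) -> Optional[Action]:
    """'See' a stream of observed events and recover the action they form."""
    # A real system would use learned models; this sketch matches one
    # hard-coded pattern: picking up a bottle and dropping it in a bin.
    if events == ["pick_up:bottle", "drop:bottle:recycling_bin"]:
        return Action("user", "recycle", "bottle", "recycling_bin")
    return None

def confirm(action: Optional[Action]) -> bool:
    """Confirm the action occurred, so a reward could be released."""
    return action is not None and action.verb == "recycle"

observed = ["pick_up:bottle", "drop:bottle:recycling_bin"]
action = understand(observed)
print(confirm(action))  # True
```

A confirmation like this is the hook the next paragraph depends on: once an action can be confirmed programmatically, a bounty can be paid against it.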

Once that understanding API exists, what’s to stop a municipality from offering cryptocurrency-driven bounties to its citizens, rewarding them as they clean up around town? What does a labor-driven company become at that point? An application that uses an API to pay a technician as he connects the last TV box in a household? It all ties into enabling full immersion in our new mirrored reality.

The PRAX, Practical’s token.

Obviously, the data required to enable something of this scale is immense, and no single entity could ever deploy enough employees to collect the entire world. This future needs to be crowdsourced, cross-platform, and useful from the start. It needs built-in rewards and uses to drive an economy. It needs a strong commitment to privacy and a path to decentralization.

We built Practical Analytics, the first Mixed Reality insights platform. Creating our product for the HoloLens gave us significant knowledge and foresight about how we can better enable developers to make applications for this emerging technology. Now that we’ve mastered the experience on the devices that will enable that future world, we can begin to recreate aspects of our insights on other platforms, so stay tuned for more on the mobile side.

This fall, we’ll be releasing an expansion to our existing product specifically to handle, collect, and provide insights into map-based data. We’re even more excited to soon reveal the Practical Economy, which will sit alongside this expansion into mapping to provide rewards and uses as we create and enable this mirrored reality.

For the latest and greatest info, go to prax.practicalvr.com and sign up for our mailing list.