Advances in immersive head-mounted displays, jump-started by the Oculus Rift, are poised to change the way we communicate, collaborate, and share information. Until now, these applications have been limited by the display technologies available to us – mostly some combination of flat screens. For some kinds of content this works just fine. If I’m working on a text document, it makes sense for it to be a flat page on the screen onto which I can type. But there are many kinds of content that would benefit from a better interaction paradigm.

Consider architects designing buildings, or engineers designing parts. These fields benefited greatly when 3D rendering came on the scene, but 3D environments shown on a flat 2D display are only part of the story – architects who have glimpsed their designs on an immersive HMD are blown away. In education, the ability for students to take "virtual field trips" using video conferencing has been compelling, but seeing the Lincoln Memorial on a screen cannot compare to feeling like you are standing there. Equally compelling will be the ability to feel present at historical events, or in environments we cannot actually visit, such as touring our solar system. Finally, research has shown that collaborations using video conferencing are more effective than audio alone, and this increased efficacy will be amplified when both parties feel present in the same virtual environment.

To achieve the promise afforded us by virtual reality, we will need to work together to build a cohesive set of virtual environments. Our community calls this vision the Metaverse (with love to Neal Stephenson). And although the actual implementation is unlikely to appear as a dark and brooding cyberpunk universe, Stephenson's vision was solid – we will by necessity begin to converge on a common way to travel between virtual environments, exchange virtual objects, and carry many of the interactions we do today in the physical world into a virtual one.

Some people call this next step in our evolution the "3D web". I think this is a useful way to frame the problem, as we will use many of the existing building blocks already in place for the web, and will be addressing many of the same issues when it comes to standards and interoperability. But this framing can also cause confusion as people try to imagine how their favorite web pages might appear in three dimensions, and cringe at the thought of clicking links on pages floating in the air.

But this is a red herring – certainly pages designed to simulate flat documents are best represented on flat displays. Immersive virtual environments give us new capabilities that will require us to think in new ways about how we share information and communicate. Instead of visiting a car manufacturer's website and looking at pictures of cars, we can visit a virtual dealership and take a virtual test drive. Instead of reading about Michelangelo's David on Wikipedia, we can virtually visit the Accademia in Florence and see the scale and detail for ourselves. Instead of watching a video of a concert on YouTube, imagine feeling like you are sitting in the front row. The possibilities are endless – and I would wager we haven’t even thought of the most compelling ones yet.

There are efforts underway to move us in the direction of the Metaverse, one of the most notable being JanusVR. This application was created by James McCrae and gives us a glimpse of some of the possibilities. Users are already creating their own virtual spaces, filled with wild and wonderful things, and linked together by the JanusVR framework. Instead of web links, environments are linked using portals that resemble doorways – select the doorway to load a new environment, and step through to travel there. To set appropriate expectations – many of the environments are rudimentary, as are the avatars and the ability to interact with each other. Years in the future we may look on this as we now look back on the Mosaic web browser and the first HTML web pages – but it is an auspicious beginning.

By now you may be wondering if I am ever going to mention the HTC Vive. JanusVR and similar efforts to this point have been focused on the seated experience enabled by the Oculus Rift. To travel through the JanusVR universe, users must direct their travel as they would in a first-person video game, using an external controller or mouse and keyboard. This works, but it creates a significant disconnect in terms of immersion, and can cause discomfort ("sim sickness") because the inner ear senses no motion while the eyes do. The Vive changes all this in a fundamental way – users are now able to physically walk around and explore a virtual environment as large as the room they occupy in reality. This blurs the lines between virtual reality and augmented reality, which is important for immersion – until we develop a Matrix-style neural interface, the best way to make virtual experiences feel real will be to layer them on top of our actual physical environments.

To be more specific – instead of creating a virtual environment of an arbitrary size that must be navigated artificially, imagine creating one that dynamically scales to overlay the physical environment you have available in your home. Just as web pages today are designed to accommodate varying screen resolutions from 640x480 up to 1920x1080 and beyond, virtual environments can be designed to accommodate smaller rooms and bigger rooms (today the HTC Vive supports rooms up to 15'x15'). Physical walls can be represented as walls in the virtual world, or as something more abstract like windows, trees, or cliffs. In the future, advanced environment scanning could enable interaction with more types of physical objects such as chairs – it might be nice to be able to sit sometimes!
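The room-fitting idea above can be sketched as a simple scale computation, analogous to responsive layout on the web. This is a minimal illustration, not any headset SDK's actual API – the function name, the clamping policy, and the idea of a minimum acceptable scale are all assumptions of the sketch:

```python
def fit_scale(design_w, design_d, play_w, play_d,
              min_scale=0.5, max_scale=1.0):
    """Compute a uniform scale factor that fits a virtual room
    (design_w x design_d meters) into the tracked physical play
    area (play_w x play_d meters).

    The result is clamped: never enlarged beyond the designed size
    (max_scale), and never shrunk below a usability floor
    (min_scale) -- below that, an environment would presumably
    fall back to artificial locomotion instead.
    """
    # The limiting dimension determines the scale, as in
    # letterboxing an image to fit a screen.
    raw = min(play_w / design_w, play_d / design_d)
    return max(min_scale, min(max_scale, raw))

# A 5m x 5m environment in a 4m x 4m room shrinks to 80%.
print(fit_scale(5, 5, 4, 4))    # 0.8
# A small environment in a large room is not stretched.
print(fit_scale(3, 3, 5, 5))    # 1.0
```

In practice the physical bounds would come from the tracking system's reported play area, and a real implementation would also handle non-square rooms and rotation of the layout to best fit the space.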

Finally, the HTC Vive is closing the immersion loop by improving the interface for interaction. Instead of abstracting your movements with a gamepad or mouse, the Vive puts a tracked controller in each hand that maps to your hands in the virtual environment. Even this is an intermediate step, as full-body positional tracking will eventually be enabled with no controller at all – think the Xbox Kinect on steroids. But for now, the Vive's controllers provide a way to reach out and touch the virtual environment overlaid on your physical space.

This is an exciting time both for virtual reality and for the evolution of the web. As these solutions are released to consumers later this year, we will experience an explosion of applications and interest in this space. The research and education community is well positioned to help guide the evolution of the Metaverse toward standards and interoperability. This is the goal of the Metaverse Working Group, which is open for participation by anyone in the community – I would encourage readers of this blog to consider joining the conversation. I'm not sure where the conversation will take us, but I’m excited to be along for the ride.