
The creation of global artificial reality is an enormous project, and its adoption will start slowly. In every VR demo I tried in the past few months, I needed assistance to get the gear on and adjust the fit. Most demos required spotters to watch me. There were straps to deal with, cords to trip over, furniture to avoid. The software was glitchy. And too often, the demo required outsiders to suggest that I “turn around and look over there,” because user interfaces are still lame. “Right now VR systems, particularly the tracking, don’t work without constant technical maintenance,” says Jeremy Bailenson, who directs the Virtual Human Interaction Lab at Stanford. “I’ve been running VR for 20 years, and the bane of my existence is driver updates. VR is ready to flourish anywhere it’s worth hiring someone to maintain it.”

Some of these problems are the ordinary growing pains of the prototype phase. But some fundamental features are also still missing. Chris Dixon, a partner at the venture capital firm Andreessen Horowitz who led his firm’s early investment in Magic Leap, thinks VR will follow the flywheel effect: sluggish to start, its momentum slowly compounding until it’s nearly unstoppable. “What gives me hope is how good VR is right now,” he says. “Once people experience high-end VR, they’re going to want it. We’ll look back on 2020 as the VR era, but in the next five years I’m bracing for the inevitable trough of disillusionment in the hype cycle.”

As the flywheel slowly begins to turn, friction will hinder its rotation. But those friction points should also be viewed as fresh opportunities. These are problems whose solutions will enable many other innovations. Any of the following pain points might be the opening that produces the first VR billionaire:

The Dork Factor

There’s no getting around the fact that everyone looks like a dork wearing a head-mounted display. It obscures our humanity. The failure of Google Glass was in large part due to the fact that you could not pass the cool test wearing one. Remember the Segway, the stand-up personal transport? If you haven’t ridden a recent version, you should; it’s amazing. But even though the scooter really works, it didn’t revolutionize transportation, in part because people looked ridiculous riding it. The form factors of VR and MR have a long way to go before they become culturally invisible.

Safety

I nearly fell in a recent VR journey because I tried to jump into a pit that wasn’t really there. Oculus weirdly warns its users to “remain seated at all times.” The problem is, if you’re present—really present—in an alternative place, you’re absent from the place your body is. That’s a recipe for accidents. Mixed reality, where the room you’re actually in remains visible, can diminish the clumsiness between realms but doesn’t eliminate it. Then there is our ignorance of the long-term effects of fooling your mind and body. This is so new we don’t even know yet what questions to ask. We do know that motion sickness is real. Bailenson found that approximately one person in 30 is susceptible to it. But what other problems will arise after tens of thousands of hours of use?

Inadequate Interface

At this moment in its development, VR is at the same infant stage as early PCs that required a command-line input. There are no intuitive tools for easy creation. The VR industry is waiting for its Doug Engelbart to invent the equivalent of the mouse. This shortcoming is perhaps the most critical missing piece preventing a rapid takeoff. Without an interface that anyone can grasp in minutes, content can be made only by the truly dedicated.

Nearly all of the non-movie VR experiences uploaded to date were created with a computer-game engine, either Unity or Unreal (and nearly all VR so far shares a similar videogamey look too). All these first-generation experiences were created with 2-D tools—screen, windows, mouse. But VR cannot reach ubiquity until the tools for VR creation live in VR itself, until VR is bootstrapped from within VR. The first steps toward native tools were announced this spring. Both Unity and Unreal have demoed VR versions of their engines that permit users to make VR in VR. However, to foster a smooth transition, the VR versions of both creation engines import 2-D metaphors (like menus)—the equivalent of a command line—into VR. Still missing is the breakthrough insight that takes advantage of VR’s peculiarities to deal with VR’s complexities.

I had an aha moment inside a VR app called Tilt Brush, which Google later acquired. I was using a brush to paint with light in three dimensions. My traces in the air could be thin, thick, flickering, pulsating, solid sheets, of any color. I was inside my creation, moving around with my whole body, working up a sweat. I was sketching a sculpture or sculpting a sketch or architecting a drawing or dancing up a building of light—I don’t know what to call it, but it was the most fun I’ve ever had in VR. And it’s not just for fun. Trials at Google revealed Tilt Brush could be an ideal prototyping tool. In a few minutes, even an untrained person could sketch out a design for a car or the layout of furniture in an office and see the result instantly.

My aha was that at its root, VR is as much a creation tool as a consumption tool. As much fun as it was to explore VR, it was more fun to make it. For a long time, no one believed amateurs would make their own videos, but that changed when you could easily film a scene by holding up a phone. VR is in line to reduce the barriers to creation even further.

Fame awaits the genius who figures out the elegant VR interface for VR creation. Such tools would let you manipulate 3-D space with minimal gestures, voice, and gaze. You’d lift, twist, speak, and nod just so. I suspect there would be a beauty in watching a skilled creator work in VR, much as there is in watching a woodworker or a dancer. A universal interface for working in VR would unleash the greatest expression of creativity the planet has yet seen.

Narrow Field of View

Right now the field of view in mixed-reality devices is too narrow. Of the current crop of MR spectacles, Meta 2’s field of view is the widest, but even its coverage is inadequate. Virtual objects located directly in front of you, within the coverage of the screen, appear present. But when you turn your gaze away, they disappear from your peripheral vision. This breaks the chain of persuasion. Fully enclosed VR devices don’t suffer the same drawback; because you see nothing at all in your peripheral vision (only deliberate blackness), you don’t get contradictory information. Objects disappear when you turn, but so does the background.
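For a rough sense of why narrow coverage feels narrow, the field of view a flat display subtends can be sketched with basic trigonometry. This is my own simplified pinhole model, not the specification of any actual headset; real optics with lenses and combiners behave differently:

```python
import math

def horizontal_fov_degrees(screen_width_m, eye_distance_m):
    """Horizontal field of view subtended by a flat screen
    centered in front of the eye. A toy pinhole model only;
    the dimensions below are illustrative, not from any device."""
    half_angle = math.atan((screen_width_m / 2) / eye_distance_m)
    return math.degrees(2 * half_angle)

# A 4 cm wide display 5 cm from the eye covers roughly 44 degrees,
# a small slice of the ~200 degrees human vision spans horizontally.
print(round(horizontal_fov_degrees(0.04, 0.05), 1))  # 43.6
```

Numbers like these make the persuasion problem concrete: anything outside that wedge simply vanishes from view.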

All mixed-reality systems labor under a second challenge that VR systems don’t: Ideally, in a mixed reality, the virtual teacup you see on your desk would be lit with the same kind of lighting, from the same direction, with the same color tone, as your real desk. To do that would require outside cameras and software that dynamically computes the lighting in the room in real time. No mixed-reality rig can do that now. The mismatch in the lighting is another weak link in the chain of persuasion. In my experience, this discrepancy tends to produce an effect I would call “artificial things really present.” You don’t confuse artificial objects with real things really present; they are artificial things really present.
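To make the lighting problem concrete, here is a toy sketch of one tiny piece of it: guessing a room’s overall light color by averaging a camera frame, a tint a renderer could then apply to virtual objects. The function and sample data are my own illustration, not any shipping MR system, which would have to recover full directional lighting in real time, not just a mean color:

```python
def estimate_ambient_rgb(frame):
    """Average the pixels of a camera frame to estimate the room's
    ambient light color. frame: rows of (r, g, b) tuples, 0-255.
    A deliberately naive sketch; real systems estimate light
    direction and intensity, not just a mean tint."""
    total = [0, 0, 0]
    count = 0
    for row in frame:
        for r, g, b in row:
            total[0] += r
            total[1] += g
            total[2] += b
            count += 1
    return tuple(c // count for c in total)

# A warm, dim room would tint virtual objects toward orange:
warm_frame = [[(200, 150, 100), (180, 140, 90)],
              [(190, 145, 95), (170, 135, 85)]]
print(estimate_ambient_rgb(warm_frame))  # (185, 142, 92)
```

Even this crude average hints at the computation involved; matching direction and shadows frame by frame is far harder, which is why no current rig does it.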

Tethers

It’s hard to overstate the benefit of wearing a lightweight device that is not tethered to a fixed location. Being free to roam deepens the sense of presence, while worry about a cable tends to disrupt the spell. Screens and processors can be made much smaller, even down to a size that will fit invisibly into glasses, but batteries are the bugaboo of VR. The computational load of VR is so huge that untethered headsets will be very difficult to fuel. It’ll be a long while, if ever, before a day’s worth of battery power can be squeezed into the frames of glasses. For now they will be wired to a battery in your pocket.