For decades now, virtual reality has been a pipe dream, a concept well ahead of the technology needed to realize it. Generating a convincing 3D world that precisely and instantly matches the head-tracked position of a player's gaze was well beyond the headsets that proliferated in research centers and on the market up through the '90s. It has only been recently that products like Sony's prototype gaming headset and the upcoming Oculus Rift have seriously attempted to create believable virtual reality headsets using modern head-tracking and display technology.

But there are some who think the technology in these systems still hasn't been developed far enough to create a truly believable, head-tracked virtual reality. Valve's Michael Abrash laid out this case in a detailed blog post last weekend, suggesting that VR headsets need a "Kobayashi Maru moment" to solve the inherent problem of display latency that plagues current and upcoming headsets.

Current non-VR games usually bottom out at about 50 milliseconds (ms) of latency between a controller input and the time the pixels actually update. That's more than fine when viewing an image on a stationary screen, Abrash says, but VR systems need much lower latency in order to trick the brain into thinking it's looking at a virtual world that completely surrounds the player wherever he or she looks. "The key to this is that virtual objects have to stay in very nearly the same perceived real-world locations as you move; that is, they have to register as being in almost exactly the right position all the time," Abrash writes. "Being right 99 percent of the time is no good, because the occasional mis-registration is precisely the sort of thing your visual system is designed to detect, and will stick out like a sore thumb."

Abrash says that to be nearly indistinguishable from reality, a VR system should ideally have a delay of 15ms or even 7ms between the time a player moves their head and the time the player sees a new, corrected view of the scene. The Oculus Rift can achieve latency of about 30 or 40 milliseconds under perfectly optimized conditions, according to creator Palmer Luckey (this doesn't take into account the added delay inherent in the physical display itself; more on that later). While Luckey acknowledges that this is slower than the "real world" modeling ideal, he says he thinks the Rift is more than capable of creating a convincing virtual world.

"The Rift developer kit has received a lot of positive feedback from those who’ve tried it, but there’s no denying we’re still a ways away from perfect VR," Luckey told Ars. "It's a difficult question, because 'convincing virtual reality' is very subjective... You can be very convincing without necessarily being indistinguishable from reality."

That certainly describes my experience with a prototype of the Oculus Rift at the Penny Arcade Expo in September. To me, the delay between my head movements and the display's response was practically unnoticeable, and the tracking felt much smoother than any other VR headset I had tried before. I could tell I was looking at a screen, obviously, but it wasn't the kind of jarring, "which way am I facing" experience of some other VR systems. I did get a little nauseous during the experience, but that came more from using the controller to turn my view without moving my head than from any delay in the virtual world I was tilting my head in.

Physical limits

Luckey says he and his team have been doing everything they can to get that latency number down, including creating their own head tracker that works more quickly than prepackaged solutions. But even if the Rift software could generate and transmit a perfectly aligned 3D perspective instantaneously, there's a significant bit of "motion-to-photons" latency introduced by the refresh rate of the display.

The standard 60Hz refresh rate of most phone-sized LCD panels (like the ones used on the Rift) is perfectly fine for your iPhone, but it introduces about 15ms of extra delay as the image is drawn pixel by pixel in front of your eye. "It’s actually more complicated though, because the image is drawn line by line, meaning pixels on the bottom of the display begin switching before the entire image is drawn on to the screen," Luckey says. "On top of that, a pixel does not have to completely switch for motion to be perceived; you can see motion even if the pixels are in the process of switching."

Abrash suggests that using 120Hz or even 240Hz displays would help a VR system get down to that holy grail of a 7ms delay, and increased scan-out speeds could help even further. Luckey agrees that a better refresh rate on standard, low-cost displays would help matters, but he says that features like higher resolution and better positional tracking would help even more. "Luckily for the VR community, the massive mobile phone market continues to help us solve many of these challenges," he said.
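The arithmetic behind those refresh-rate figures is simple: a panel takes one full refresh interval to scan a frame from top to bottom, so a faster refresh directly shrinks the display's share of the latency budget. A quick sketch (the refresh rates come from the discussion above; treating half a refresh interval as the average wait for a mid-screen pixel is our own simplification):

```python
# Back-of-the-envelope display latency from scan-out alone.
# A panel refreshing at rate_hz spends one full refresh interval
# drawing the frame line by line, so a pixel halfway down the screen
# lights up roughly half an interval after scan-out begins.

def frame_time_ms(rate_hz: float) -> float:
    """Full refresh interval in milliseconds."""
    return 1000.0 / rate_hz

for rate_hz in (60, 120, 240):
    full = frame_time_ms(rate_hz)
    print(f"{rate_hz}Hz: full scan-out ~{full:.1f}ms, mid-screen ~{full / 2:.1f}ms")
```

At 60Hz the full interval is about 16.7ms, which is the ballpark for the 15ms figure Luckey cites; moving to 120Hz or 240Hz cuts the display's contribution toward Abrash's 7ms target.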

One way to short-circuit that kind of inherent hardware delay is predictive head tracking. By guessing which way a player is going to move and pre-rendering the correct display for that view, the apparent latency could be cut down drastically.

Luckey says the Rift team has looked into this potential solution, but he says it's "no silver bullet." While it's often relatively easy to guess which way a goal-oriented gamer will want to look next, things "[become] especially tricky when trying to predict when the player will stop moving." In some cases, using predictive tracking can "actually be worse than no prediction at all," he added.
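Luckey doesn't describe how the Rift team's prediction works, but the simplest form of predictive tracking is constant-velocity extrapolation: measure how fast the head is turning, assume it keeps turning at that rate for the length of the pipeline's latency, and render the scene for the extrapolated pose. A minimal sketch of the idea (the function, names, and numbers here are illustrative, not the Rift's implementation):

```python
# Naive predictive head tracking: extrapolate yaw forward by the
# pipeline's motion-to-photons latency at the current angular velocity.
# (Illustrative sketch only -- not the Rift's actual tracker.)

def predict_yaw(yaw_prev: float, yaw_curr: float, dt: float, latency: float) -> float:
    """Extrapolate yaw (degrees) forward by `latency` seconds, assuming
    the angular velocity measured over the last sample stays constant."""
    angular_velocity = (yaw_curr - yaw_prev) / dt  # degrees per second
    return yaw_curr + angular_velocity * latency

# A head turning at 100 deg/s, sampled every 10ms, with 40ms of latency:
# the renderer draws the scene 4 degrees ahead of the measured pose.
predicted = predict_yaw(yaw_prev=10.0, yaw_curr=11.0, dt=0.010, latency=0.040)
print(predicted)
```

The failure mode Luckey describes falls straight out of this sketch: when the head stops abruptly, the extrapolation overshoots and the scene briefly swings past where it should be, which can look worse than the unpredicted latency it was meant to hide.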

Valve has long been rumored to be working on its own virtual or augmented reality headset, and Abrash didn't respond to a request for comment on the development of that potential project. But his blog post suggests that he sees the journey to the perfect VR headset as just beginning.

"It's my hope that if the VR market takes off in the wake of the Rift's launch, the day when display latency comes down will be near at hand," Abrash wrote.