"Without going into a rant, the term 'Retina Display' is garbage, I think."

Palmer Luckey, the founder and creator of the Oculus Rift, is a bit of a perfectionist when it comes to creating the best possible virtual reality experience. So when our recent interview turned toward the ideal future for a head-mounted display—a theoretical "perfect" device that delivers everything he could ever dream of—he did go on a little rant about what we currently consider "indistinguishable" pixels.

"There is a point where you can no longer distinguish individual pixels, but that does not mean that you cannot distinguish greater detail," he said. "You can still see aliasing on lines on a retina display. You can't pick out the pixels, but you can still see the aliasing. Let's say you want to have an image of a piece of hair on the screen. You can't make it real-size... it would still look jaggy and terrible. There's a difference between where you can't see pixels and where you can't make improvements."

Quibbling about aliasing on life-size hairs may seem nitpicky, but that's the level of detail Luckey is thinking about when considering how far VR can eventually go. "To get to the point where you can't see pixels, I think some of the speculation is you need about 8K per eye in our current field of view [for the Rift]," he said. "And to get to the point where you couldn't see any more improvements, you'd need several times that. It sounds ridiculous, but HDTVs have been out there for maybe a decade in the consumer space, and now we're having phones and tablets that are past the resolution of those TVs. So if you go 10 years from now, 8K in a [head-mounted display] does not seem ridiculous at all."

More pixels, fewer problems

Before we get to that level of in-your-face fidelity, Luckey said that improving the current resolution of the Rift is one of Oculus' main areas of focus. "The visual side of the Rift is one of the most important ones," he said. "Every time you can throw more pixels at it, it really makes a huge difference. On phones we're at the point where throwing more pixels at it is not really improving it that much each time, but [with] VR, we're still at the point where [doubling] the pixels is still a clear, noticeable improvement."

This much is evident in my hands-on time with the Oculus Rift HD prototypes that the company was recently showing off at the Penny Arcade Expo. Flying around in a Hawken mech with a 1080p display strapped to my face was a much more engrossing experience than playing the same game on the comparatively muddy Oculus Rift Developer Kits that have already shipped to early customers. You can still make out the slight borders between the pixels on the HD unit if you really focus, but the extra resolution makes it much easier to make out details in the environment like small text or far-off enemies. Something about the increased resolution also seemed to help limit the nausea that I sometimes experience on the lower-resolution units.

Oculus VP of Product Nate Mitchell said that the final consumer version of the Rift will be "at least as good" as the HD prototype when it eventually sees release (though the company still isn't sharing any details on timing or pricing). "This was like our favorite screen five months ago," he said, implying that there's one they like more currently, and another they will prefer in five more months.

"We're not pulling that trick where we show something awesome like a concept car, then something actually comes out and they say, 'This is garbage,'" Luckey added. "That would be really evil of us to do. There are a lot of deficiencies in that with the optics and the screen and really everything about [the prototype]. The consumer version is vastly improved in several key areas, and we want to focus on consumerizing that and getting that shippable rather than improving on this thing that we know can never be really great."

Latency and position

Another major issue for delivering truly realistic virtual reality is latency. The time between moving your head and actually seeing the correctly rendered view for that angle is key to creating a believable space. But Luckey and Mitchell told me that this is close to being a completely solved problem as far as the Rift itself is concerned.

"We can get our hardware well below the threshold for human perception, without astronomical cost," Luckey said. "Our hardware will over time beat all of the latency out of our end of the pipeline... but it's not all up to us."

That's because the hardware is only part of the latency equation. Even if the image lag introduced by the Rift is no more than the one or two milliseconds (ms) imposed by the USB transfer cable (well below what a human can perceive), the game engine has to be able to keep up with that level of performance.

He only does everything

When I asked Mitchell about the most important thing that recently hired CTO and Doom developer John Carmack brings to the Oculus team, he was a bit non-committal. "He has a really good handle on software and hardware and graphics," Mitchell said. "It sounds kind of lame to say 'He focuses on everything,' because that means he's not focused. But he really is working on so many things because he's probably one of the few people in the world that has such a grasp on the whole pipeline from end to end and who can see where improvements need to be made in that pipeline and actually execute on those improvements."

"One of the things that's great about [Carmack] is that he's super-focused not only on today's VR but on the vision for VR," he continued. "How do we improve the user experience, what should be the standards that we set, how do we achieve that level of quality? That's been a big part of what he's done, coming in as CTO, is looking at the roadmap and [asking,] 'How do we improve for what's coming, the future of VR, and how do we really set the bar?'"

"If they're running their game at 30 frames per second, then there's going to be a huge amount of latency in the VR experience," Mitchell said. "That's why if you look at a game like Team Fortress 2, which most of the time runs un-V-synced, just as many frames as you can throw at it... you can run that game at a really high frame rate, which gives you a lower latency overall in the experience."

Even if you can't get up to 500 frames per second in software, though, there are some predictive tricks that the hardware can use to basically "cheat" a bit more latency out of the experience, Luckey said. "It turns out that humans are pretty predictable machines—we know a lot about how they work. So if a person barely starts turning their head, what we can do in the sensor is say, 'Well we know they're starting to rotate, and this is their velocity curve.' They can't instantly stop, so you can actually predict into the future."

"Prediction has gotten a bad rap over time because they say, 'Oh you can only predict 50 ms into the future and that's not enough,'" he continued. "But when our system is well under 20 ms [of latency], which I think is going to happen very quickly, you can predict 10 ms into the future with basically no problems whatsoever. So prediction gets you down to the level where you have zero perceived latency, even if you have more actual latency."

Keeping latency down will also be key when implementing plans for positional tracking—keeping track of the user's position as their head moves side to side or forward and backward rather than just rotating in place. Mitchell called this the most important thing that needs to be added to the current Rift prototypes to help the VR experience. The technology to do this kind of tracking is readily available, but for a consumer version, Mitchell said Oculus needs to consider cost as well as the size of the area being tracked. "We don't necessarily want something where you can sprint around the room," he said.

"Positional tracking isn't a 'just go buy the hardware problem,'" Luckey added. "We can buy the hardware at a pretty low cost today and build something shippable, but the software side is very difficult. It's like Kinect; a lot of the hardware was done by PrimeSense, but PrimeSense didn't make the Kinect... Microsoft had to spend piles of money to develop this robust software around it."