I got the sense from the talk that you feel very strongly that vsync needs to be there, and that frame rates need to be as high as possible, and that frame rates are more important than complexity of geometry.

PL: That is absolutely true. I think it's really important in VR to keep in mind that sometimes you do have to sacrifice fidelity for framerate. And you do want vsync, because without it you get tears in the world. You'll actually have objects that appear like they're being sheared apart, or shifted relative to each other. That takes you out of the game very quickly, because it's constantly happening as you look around. So you do need vsync.

It's worth mentioning that there's a lot of hate out there for vsync -- a lot of people say, "Oh, you should turn it off, because it adds latency in games" -- [but] when vsync is done correctly, it doesn't necessarily add a ton of latency, or a perceptible amount of latency, in VR. There are also a lot of games that do a very poor job of vsync. There are even games where you want to turn vsync off in the game's own settings and then force it through the Nvidia or AMD control panel, because it does a better job that way.

Good vsync -- yes, you really need it, or the whole world appears to be tearing. And you really need at least 60 frames per second. Right now we're saying 60 because that's what our hardware is capable of running, but if we had a display that could run at 90 frames per second, it would make a huge difference -- enough that you're going to want to run at 90 frames per second as often as you can.
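The framerate targets he names translate directly into a per-frame render budget. A minimal sketch of that arithmetic (nothing Oculus-specific, just the numbers behind why 90 fps is a harder target than 60 fps):

```python
# Back-of-envelope: milliseconds available to render one frame
# at the refresh rates discussed in the interview.

def frame_budget_ms(fps: float) -> float:
    """Per-frame render budget in milliseconds at a given refresh rate."""
    return 1000.0 / fps

for fps in (60, 90):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 60 fps -> 16.7 ms per frame
# 90 fps -> 11.1 ms per frame
```

At 90 fps the engine has roughly a third less time per frame than at 60, which is one reason geometry complexity sometimes has to give way to framerate.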

You already went from 800p to 1080p for the dev kits. How far do you want to push upgrades and improve the tech?

PL: 8k per eye. [laughs] I mean, that's just a number where you could roughly, approximately stop seeing pixels at the current field of view. Realistically, we're not targeting any specific resolution as "this is the right resolution" because until we get to that 8k by 8k or higher resolution, we just want it to be as high as possible. We're at 1080p in the prototypes that we're showing, but we'd like to push it even beyond that.
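The "8k per eye" figure can be sanity-checked with rough arithmetic. The numbers below are assumptions for illustration, not Oculus specs: a ~100-degree horizontal field of view, ~60 pixels per degree as the commonly cited one-arcminute acuity limit, and roughly 960 horizontal pixels per eye from a shared 1080p panel.

```python
# Rough sanity check of "8k per eye". Assumed figures, not official specs:
FOV_DEG = 100     # approximate horizontal field of view per eye
ACUITY_PPD = 60   # ~1 arcminute per pixel, a commonly cited acuity limit

def pixels_per_degree(horizontal_pixels: int, fov_deg: float = FOV_DEG) -> float:
    """Average linear pixel density across the field of view."""
    return horizontal_pixels / fov_deg

# A shared 1080p panel gives each eye roughly 960 horizontal pixels;
# 8000 stands in for the "8k per eye" figure from the interview.
for width in (960, 8000):
    ppd = pixels_per_degree(width)
    verdict = "above" if ppd >= ACUITY_PPD else "below"
    print(f"{width}px -> {ppd:.0f} px/deg ({verdict} the acuity threshold)")
```

Under those assumptions, 8k per eye lands comfortably above the point where individual pixels stop being resolvable, consistent with his "roughly, approximately stop seeing pixels" framing.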

What most surprised me about playing it was the FOV, actually. Yes, I could see the pixels, but that was maybe less unrealistic, because I'm used to pixels. But the FOV cutting off at the edges was a bit odd.

PL: The HD prototypes that we're showing do have a lower field of view than the dev kit, because we're using the same optics that we used for the dev kit in these prototypes. We just swapped out a panel, we didn't change the optics, or the ergonomics, or anything else.

The field of view for the consumer version, we do plan on increasing. And not just the field of view, but also the clarity of the optics and their sensitivity to adjustment, so that people can have a much more clear view across their entire field of view, rather than having it blurred in the edges.

For me, in some way -- and I'm sure you've done much more research, and this is just me saying this -- but the one thing I'd expect to create a sense of reality is peripheral vision, and feeling it wrap around you.

PL: I totally agree. It's a lot of different trade-offs. Optically, as you go past 100 degrees, you run into a lot of limits, and they can't just be solved with clever design. They're the hard limits of refractive optics, and it's very hard to get around them. You can greatly increase the size -- if you double the size of the panel, you can get a little further, but you're not doubling the field of view for doubling the size of the panel. It's diminishing returns: you end up with a huge headset with a slightly improved field of view. There are a few tricks I'm trying that I think are going to be able to pump the field of view up beyond even where we are right now.

One of the issues with going to a larger field of view is that we're already at a fairly low resolution in terms of pixels per degree, and most of our vision is focused out here [gestures to the sides of his field of view in real life]. Let's say you wanted to up the field of view to 200 degrees. Not only are you cutting the resolution in half, it's actually even worse than that, because the resolution here is cut in half, but you're also throwing away all that resolution at the edges, where you can't, unfortunately, utilize most of it most of the time. So it's a set of tradeoffs: how much field of view do you have, and what kind of pixels-per-degree compromise are you trying to make? But, like I said, I have a few tricks.
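The trade-off he describes -- spreading a fixed number of panel pixels over more degrees -- is simple division. A sketch with an illustrative panel width (not a real spec):

```python
# Field-of-view vs. pixel-density trade-off: with a fixed panel, widening
# the field of view spreads the same pixels over more degrees.
PANEL_WIDTH_PX = 1920  # illustrative horizontal panel resolution, not a spec

def avg_ppd(fov_deg: float, width_px: int = PANEL_WIDTH_PX) -> float:
    """Average pixels per degree for a panel stretched across fov_deg."""
    return width_px / fov_deg

print(f"100 deg -> {avg_ppd(100):.1f} px/deg")
print(f"200 deg -> {avg_ppd(200):.1f} px/deg")  # exactly half of the above
```

And as he notes, the average understates the loss: the pixels pushed out into the periphery contribute less, since the eye's acuity there is lower.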

You're making hardware, which is obviously a lot more challenging, in a certain sense, than making software -- at least in the sense that you do have to finish it, send it off to a factory to have it manufactured and put into boxes, probably in Asia.

PL: And it takes a lot of time to do all of that. So software, you can update till the day you release, and then the day after that. You can't do that with hardware.

These days, you can actually do deals for manufacturing that are a lot more agile than what was possible even five years ago. But how is that working for you?

PL: It's working well. It's just one of the realities of hardware, that you have to lock months in advance so that you can start manufacturing the hardware, shipping it over, getting it in boxes, and potentially getting it on shelves. It just takes a very long time and it is much harder than software in that way.