Okay, let's grab an avid PC gamer from 1994 and show him PC games of 2004, 10 years later.

Imagine going from 320x200 to 1024x768: roughly 12 times as many pixels.

From the first Doom to Doom 3, from the first Need for Speed to Need for Speed: Underground 2. Not to mention World of Warcraft, Halo, GTA: San Andreas, and all the other wonderful games we have played.

To describe his level of surprise, "shock" or "awe" just isn't going to cut it. Everything in 3D graphics changed beyond recognition. From flat maps, no physics, textures a few kilobytes in size, and scenes built from a small number of rectangles, we went to games with photorealistic landscapes of incredibly complex geometry.

He might actually kill you to play these games.

That's how much change there was.

Now let's try the same trick with a gamer from 2008 and take him to 2018.

And he'll like a lot of things: a little more detail in eyes, skin, and grass up close, and so on. But anything more than 5 feet away will look pretty much the same.

To him it'll be like: "Okay, what's your point? Who cares?"

Why is that?

To understand this we have to remember the nature of human progress in almost any industry: it follows S-shaped curves. First comes slow growth at the bottom of the S-curve, which gradually speeds up into a period of rapid growth, the middle section of the S-curve. Then comes a gradual slowdown as the technology matures. For 3D video games, the slow-growth phase ran from the 1960s or 1970s to around 1990. Rapid growth lasted from roughly 1990 to 2010. And now we have entered the slow-growth state again as the technology matures. But at the same time we are also on another S-shaped curve: the VR technology curve. Looks something like this:

And right about now we are just entering the knee of that curve. And whereas in 3D graphics the key technology was GPU-accelerated rendering, the key technology for VR is eye tracking with foveated rendering, and we are about to see an incredible change in VR technology, just like gamers from the '90s saw incredible changes in 3D graphics.

To understand foveated rendering we first need to understand how the human eye works. Our eye can see clearly only about 1.5°-2° of its field of view. To test this, try to distinguish any letter just 3 words to the left or right of where you're looking right now, without moving your gaze.

In fact, according to this, the human eye loses acuity by 50% every 2.5° away from the center. And past 30° it loses acuity even faster.

But what's average vision? According to this source, 20/15 vision is considered average for humans and translates to roughly 80 horizontal pixels per horizontal degree. Only about 1% or 2% of tested 18-year-olds have 20/10 vision, or 120 PPD (pixels per degree).

To compare this to a modern TV, we just calculate the number of horizontal pixels per degree. So if we view our TV at a horizontal angle of view of 30°, then 1920/30 = 64 pixels per degree for 1080p.

Or 128 pixels per degree for 4K resolution.

Or roughly 21 ppd for 640x480.
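These pixels-per-degree figures are easy to check with a few lines of Python (the 30° viewing angle is the assumption from above, not a property of the displays themselves):

```python
def ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average pixels per degree across a given horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

# A TV filling a 30-degree horizontal field of view:
print(ppd(1920, 30))         # 1080p   -> 64.0 ppd
print(ppd(3840, 30))         # 4K      -> 128.0 ppd
print(round(ppd(640, 30)))   # 640x480 -> ~21 ppd
```

Sit closer and the angle grows, so the same screen delivers fewer pixels per degree.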

Let's calculate Oculus Rift/HTC Vive ppd.

Compared to regular displays, VR headsets have an incredibly wide FOV, roughly 110 degrees, so a lot of sharpness is lost.

So for the Rift/Vive the value would be 1200/110 = ~11 ppd, given a horizontal resolution of 1200 per eye and a horizontal FOV of 110°.

Windows Mixed Reality headsets mostly have a 105° FOV and 1440x1440 resolution.

So 1440/105 = ~13.7 ppd.

Or if we take the Samsung Odyssey with its slightly wider FOV: 1600/110 = ~14.5 ppd. However, vertical ppd is actually lower on the Samsung Odyssey, so on average it looks almost the same.
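The same one-line formula reproduces all the headset numbers (resolutions and FOVs here are the per-eye figures quoted above):

```python
def ppd(horizontal_pixels, horizontal_fov_deg):
    """Average pixels per degree: horizontal resolution over horizontal FOV."""
    return horizontal_pixels / horizontal_fov_deg

# Per-eye horizontal resolution and horizontal FOV in degrees:
headsets = {
    "Rift / Vive":     (1200, 110),
    "WMR (typical)":   (1440, 105),
    "Samsung Odyssey": (1600, 110),
}
for name, (pixels, fov) in headsets.items():
    print(f"{name}: ~{ppd(pixels, fov):.1f} ppd")
```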

In other words, we have a long way to go before VR reaches the visual clarity of modern displays.

Now let's fantasize a little. Let's build a perfect VR headset with resolution indistinguishable from reality for most people. And our perfect VR headset would, of course, have eye tracking and foveated rendering.

Just to be clear: eye tracking systems work in such a way that you can't "outrun" them with your eye. As soon as your eye moves to gaze at another point, a new detailed frame is drawn there, replacing the low-detail peripheral one, before your visual cortex can tell the difference.

Just for the sake of "perfect", let's make the resolution in our 2° foveal center as high as 160 ppd. It's slight overkill, but in a fantasy world we don't care. 2.5° away from the center, our acuity is only half as good, so we can roughly halve the resolution across a circle 5° in diameter. And then every 5° we keep halving the resolution until it's around 5 ppd, at which point lowering it further could hurt the detection of smooth motion in our periphery, or of very high-contrast objects like the moon and stars.

Now let's calculate how many pixels per degree we would need to draw:

center to 5° — 160 ppd

5° to 10° — 80 ppd

10° to 15° — 40 ppd

15° to 20° — 20 ppd

20° to 25° — 10 ppd

25° to 30° — let's stop at 5 pixels per degree from here on out.

So the entire field of view, except for a circle 25° in diameter, would be drawn at 5 ppd. Each eye's natural horizontal FOV is ~155°. So our fantasy VR headset should also have a perfect 155° FOV per eye, because why not? That's roughly a circle 155*5 = 775 pixels in diameter per eye, excluding the 25° center area.

And that smaller center area would have 160*5° + 80*5° + 40*5° + 20*5° + 10*5° pixels across: a circle 1550 pixels in diameter.

So in our scenario, even though we're using values likely twice as high as minimal realistic vision requires, we only need to render the small center circle 1550 pixels in diameter and the larger periphery circle 775 pixels in diameter. For the periphery that's ~470'000 pixels (the area of a circle 775 pixels in diameter) minus ~12'300 pixels (the area of the 125-pixel circle covering the 25° center), or roughly 460'000 pixels.

And roughly 2'000'000 pixels for high ppd center area.

Which comes to roughly 2.5 megapixels per eye! In other words, we need to render only 5 megapixels total to achieve nearly perfect vision in VR, which is about the same as just 1600x1600 per eye!
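The whole pixel budget above can be sketched in a few lines (the ring widths and ppd values are the fantasy-headset assumptions from the text, not measured figures):

```python
import math

def circle_area(diameter_px: float) -> float:
    """Area of a circle in pixels, given its diameter in pixels."""
    return math.pi * (diameter_px / 2) ** 2

# High-detail center: five bands, each spanning 5 degrees of diameter,
# halving in ppd as we move outward (160, 80, 40, 20, 10).
bands_ppd = [160, 80, 40, 20, 10]
center_diameter_px = sum(ppd * 5 for ppd in bands_ppd)  # 1550 px

# Periphery: the full 155-degree field at 5 ppd, minus the
# 25-degree center area it overlaps (25 deg * 5 ppd = 125 px across).
periphery_px = circle_area(155 * 5) - circle_area(25 * 5)

total_per_eye = circle_area(center_diameter_px) + periphery_px
print(center_diameter_px)               # 1550
print(round(periphery_px))              # ~460'000 pixels
print(round(total_per_eye / 1e6, 2))    # ~2.35 megapixels per eye
```

Note this treats the whole field of view as circular; a real display panel is rectangular, so the practical number would land a bit higher.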

So with eye tracking and foveated rendering we can play modern VR games on the same modern GPUs at an incredible level of visual clarity, basically indistinguishable from reality!

To really appreciate this, try building a headset with the same level of detail but without foveated rendering.

That would be a resolution of roughly 25'000 x 25'000 (155° x 160 ppd = 24'800 pixels per side), or 615 megapixels per eye!

That would require a whopping 100-200 times more GPU processing power to render!
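A quick check of the uniform-resolution case, using the ~2.5 MP per-eye foveated budget from above (the raw pixel-count ratio lands somewhat above the 100-200x GPU-power estimate, since rendering cost doesn't scale perfectly linearly with pixel count):

```python
# Uniform rendering: 160 ppd across the full 155-degree FOV, per eye.
side_px = 155 * 160                 # 24'800 pixels per side
uniform_mp = side_px ** 2 / 1e6
print(round(uniform_mp))            # 615 megapixels per eye

# Compared with the ~2.5 MP per-eye foveated budget:
print(round(uniform_mp / 2.5))      # ~246x as many pixels
```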

And given the significantly reduced GPU requirements, one can only imagine the future possibilities of this tech for things like smart glasses. Or standalone VR headsets.

And now that we've gotten all excited about the VR future, I want to take a small step back: yes, it is exciting for sure, but right now we're pretty far from making a headset with a native resolution high enough to get there. Display technology just isn't there yet. But I, for one, am very optimistic about our eye-tracked, foveated-rendering VR future.

