One of virtual reality’s (VR) hot topics of the past few weeks has been SensoMotoric Instruments’ (SMI) collaborations with other hardware companies, including head-mounted display (HMD) manufacturers and GPU giants NVIDIA. At SIGGRAPH 2016, the latter is presenting the technology embedded in HMDs for the very first time, and VRFocus went hands-on to examine exactly what eye-tracking can mean for VR now and in the future.

The first order of business is to mention FOVE: an HMD designed specifically with eye-tracking technology in mind. While FOVE offers a good enough example of the technology, the HMD itself is lacking in comparison to the Oculus Rift and HTC Vive, both in terms of content (though it should be noted that FOVE hasn’t yet seen consumer release) and comfort. Sadly, SMI’s technology suffers from many of the same issues as FOVE, in that it’s not yet refined enough to accommodate all eye shapes and sizes without issue; however, it’s NVIDIA’s graphical demonstration that offers perhaps the best showcase yet of what the technology is capable of.

Seated at the centre of a virtual representation of a typical North American elementary school classroom, the user is taken through a demonstration that compares a render without foveation, one with temporally stable foveation, and finally one with contrast-preserving foveation. The classroom, littered with tongue-in-cheek references to NVIDIA’s products and technologies, is a clean, crisp design with high-quality visuals. As with modern VR HMDs generally, there are no issues with latency or framerate in the demonstration, yet the benefits foveated rendering can offer are still very apparent.

Switching from a standard render to temporally stable foveation immediately adds a blur effect to the peripheral vision. The eye-tracking technology decouples your central field of view from the head orientation of the HMD, tying it instead to the movement of your pupils; a little disconcerting to begin with, given how acclimatised to VR technology VRFocus has become, but arguably significantly closer to real-world vision once that initial hurdle has passed. The problem here is with aliasing and artefacts: the user will often find the edges of elements behaving abnormally when not in the central field of view.
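Conceptually, foveated rendering of this kind assigns each screen region a level of detail based on its angular distance from the tracked gaze point, rather than from the centre of the display. The sketch below illustrates the idea; the function name, zone thresholds, and display figures are illustrative assumptions, not SMI’s or NVIDIA’s actual parameters:

```python
import math

def foveation_level(pixel, gaze, fov_degrees=110.0, resolution=(2160, 1200)):
    """Pick a shading level for a pixel from its angular distance to the
    tracked gaze point. Thresholds here are illustrative only."""
    # Approximate degrees-per-pixel from the headset's horizontal field of view.
    deg_per_px = fov_degrees / resolution[0]
    dist_px = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    eccentricity = dist_px * deg_per_px  # angular offset from gaze, in degrees
    if eccentricity < 5.0:
        return "full"      # foveal region: render at native quality
    elif eccentricity < 20.0:
        return "half"      # mid-periphery: reduced shading rate
    else:
        return "quarter"   # far periphery: heavily reduced, then blurred

# The key point: quality follows the eye tracker, not the display centre.
print(foveation_level(pixel=(1080, 600), gaze=(1080, 600)))  # → full
print(foveation_level(pixel=(0, 0), gaze=(1080, 600)))       # → quarter
```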

Contrast-preserving foveation is intended to combat exactly that. Here, a real-time algorithm determines what is central and what stands out due to colour or lighting, lowering an object’s level of detail without relegating it wholesale to a lower priority. This, according to NVIDIA, significantly reduces rendering costs without a noticeable loss in visual quality.
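One way to picture this is as a peripheral blur followed by a pass that re-injects part of the lost local contrast, so low-contrast texture stays smoothed while strong edges survive. This is a simplified one-dimensional illustration of the general idea, not NVIDIA’s actual shader code; the `boost` parameter is an assumption:

```python
def contrast_preserving_blur(row, window=3, boost=0.7):
    """Blur a 1-D strip of luminance values, then add back a fraction of
    the lost local contrast so salient edges remain visible."""
    n = len(row)
    half = window // 2
    blurred = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        blurred.append(sum(row[lo:hi]) / (hi - lo))
    # Re-inject part of the original-minus-blur difference: flat regions
    # are unaffected, while step edges are partially restored.
    return [b + boost * (o - b) for o, b in zip(row, blurred)]

# A hard edge keeps most of its step after the filter:
edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(contrast_preserving_blur(edge))
```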

What this means for VR is that less GPU time is spent rendering elements outside the centre of the user’s field of view, freeing up computational power for more detail in that central area. However, while this sounds like a wonderful step forward, the cost of SMI’s technology is at present prohibitive. Furthermore, the eye-tracking hardware is mounted underneath the lenses in the HMD, requiring the user to send their device to SMI or to have significant engineering knowledge of their own.
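To put rough numbers on the potential saving: if only a small foveal circle is shaded at full rate and the periphery at a reduced rate, the total shading work drops sharply. This is a back-of-the-envelope model with illustrative figures, not NVIDIA’s benchmarks:

```python
import math

def shading_cost(resolution, fovea_radius_frac, peripheral_rate):
    """Fraction of full-resolution shading work under a simple two-zone
    foveation model. Illustrative only.

    fovea_radius_frac: foveal circle radius as a fraction of screen height.
    peripheral_rate: shading rate outside the fovea (1.0 = full rate).
    """
    w, h = resolution
    total = w * h
    fovea = min(total, math.pi * (fovea_radius_frac * h) ** 2)
    periphery = total - fovea
    return (fovea + peripheral_rate * periphery) / total

# A fovea of 20% of screen height shaded fully, the rest at quarter rate,
# works out to roughly 30% of the original shading cost:
print(shading_cost((2160, 1200), 0.2, 0.25))
```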

In the future, however, as the modern wave of VR has already proven, falling technology costs could well allow eye-tracking of this nature to become the norm. Showcased here on GeForce GTX 1080 graphics cards, SMI’s technology already has obvious benefits. Further down the line, in a generation or two of HMDs and GPUs, it could be integrated as standard, much like head-tracking or head-related transfer function (HRTF) audio; that would surely be a significant landmark not just for VR, but for computer graphics fidelity as a whole.