We are big fans of mixed reality here at UploadVR. It is a great way of showing what people in VR are doing.

Startups like Owlchemy Labs and LIV are attempting to make the capture process easier while pushing for higher quality, but current approaches all share one major limitation: the most expressive part of the human body, the face, is hidden behind the headset during capture. You largely have to imagine people's expressions as they interact with a virtual world.

Google, however, showed off some impressive research that takes the technology to the next level. Using a collection of techniques, including a modified HTC Vive with SMI eye tracking, Google digitally recreates your face in place of the VR headset that is blocking it.

This work is the result of an “ongoing collaboration” between the Research, Daydream Labs, and YouTube teams at Google. According to a blog post diving into the research, here is how it works:

The core idea behind our technique is to use a 3D model of the user’s face as a proxy for the hidden face. This proxy is used to synthesize the face in the MR video, thereby creating an impression of the headset being removed. First, we capture a personalized 3D face model for the user with what we call gaze-dependent dynamic appearance. This initial calibration step requires the user to sit in front of a color+depth camera and a monitor, and then track a marker on the monitor with their eyes. We use this one-time calibration procedure — which typically takes less than a minute — to acquire a 3D face model of the user, and learn a database that maps appearance images (or textures) to different eye-gaze directions and blinks. This gaze database (i.e. the face model with textures indexed by eye-gaze) allows us to dynamically change the appearance of the face during synthesis and generate any desired eye-gaze, thus making the synthesized face look natural and alive.
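To make the "gaze database" idea concrete, here is a minimal sketch in Python of how textures indexed by eye-gaze might be stored and retrieved. All names and data here are illustrative assumptions, not Google's actual implementation: calibration associates each captured texture with a gaze direction, and at synthesis time the eye tracker's reading selects the nearest calibrated texture.

```python
import math

class GazeDatabase:
    """Hypothetical gaze-indexed texture store (illustrative only)."""

    def __init__(self):
        # Each entry pairs a (yaw, pitch) gaze direction in degrees,
        # recorded during calibration, with a texture identifier.
        self.entries = []

    def add(self, gaze, texture_id):
        self.entries.append((gaze, texture_id))

    def lookup(self, gaze):
        # Nearest-neighbor search over the calibrated gaze directions:
        # return the texture whose recorded gaze is closest to the
        # eye tracker's current reading.
        def dist(entry):
            (yaw, pitch), _ = entry
            return math.hypot(yaw - gaze[0], pitch - gaze[1])
        return min(self.entries, key=dist)[1]

# Calibration: as the user tracks the on-screen marker, each captured
# texture gets labeled with the gaze direction at that moment.
db = GazeDatabase()
db.add((-20.0, 0.0), "look_left")
db.add((20.0, 0.0), "look_right")
db.add((0.0, 15.0), "look_up")
db.add((0.0, 0.0), "look_center")

# Synthesis: the headset's eye tracker reports a live gaze direction,
# and we fetch the closest matching face texture.
print(db.lookup((18.0, 2.0)))   # -> look_right
print(db.lookup((-1.0, 1.0)))   # -> look_center
```

The real system presumably blends textures smoothly rather than snapping to the single nearest sample, but nearest-neighbor lookup captures the basic indexing idea the quote describes.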

Here’s a video showing the approach in action: