Facebook is releasing public code and formally open sourcing DeepFocus, its research into ultra-realistic visuals for VR headsets.

The DeepFocus approach to rendering produces “natural blur” by way of a neural network architecture that maintains “the ultrasharp image resolutions necessary for high-quality VR,” according to the company. Facebook Reality Labs is the new name for Facebook’s Oculus research teams working on VR and AR concepts that could take years to realize commercially.

One such project related to DeepFocus was revealed earlier this year: the Half Dome varifocal hardware prototype, which physically moves a VR headset’s display panels to produce visuals that could solve the “vergence-accommodation conflict” plaguing current designs. The conflict arises because the eyes point (verge) at an object’s apparent depth in the virtual scene while their lenses stay focused (accommodate) on a screen at a fixed distance, and it can limit how long some people can wear a VR headset without feeling some kind of discomfort.

At the Oculus developer conference in September, Facebook Reality Labs Chief Scientist Michael Abrash talked a bit about some of these research efforts.

A research paper presented at SIGGRAPH Asia this month details the DeepFocus approach and explains that, beyond a varifocal architecture like Half Dome, it also “supports high-quality image synthesis for multifocal and light-field displays.” From today’s Oculus blog post:

. . .though we’re currently using DeepFocus with Half Dome, the system’s deep learning–based approach to defocusing is hardware agnostic. Our research paper shows that in addition to rendering real-time blur on varifocal displays, DeepFocus supports high-quality image synthesis for multifocal and light-field displays. This makes our system applicable to the entire range of next-gen head-mounted display technologies that are widely seen as the future of more advanced VR.

Half Dome is one of the most interesting hardware projects Facebook’s research teams have revealed publicly. Unfortunately, though, the effort didn’t make an appearance at Oculus Connect 5 in September, and co-founder Brendan Iribe’s exit from the company was revealed just a few weeks later. Amid a report that his departure was related to the future direction of PC-based headsets, we wondered whether the open sourcing of this related project might indicate Facebook had ceased its research into Half Dome.

According to a Facebook spokesperson, research is continuing with Half Dome and DeepFocus, and this open sourcing effort is intended to “accelerate development in this area to benefit the industry as a whole.”

“Facebook Reality Lab is pursuing many ‘feature prototypes’ to explore the potential future for VR immersion – Half Dome is one of those,” according to the spokesperson. “The Display Systems Research (DSR) team at FRL continues to develop advanced display technologies, including DeepFocus, to explore the visual frontier of VR/AR. Half Dome and many other feature prototypes are constantly under development at FRL.”

According to the Oculus blog post today, researcher Salah Nouri “joined the project to help demonstrate that DeepFocus could actually run on Half Dome and render real-time blur on present-day processors at a resolution fit for VR.” From the post again:

Nouri was able to demo DeepFocus and Half Dome on a four-GPU machine, a significantly more powerful setup than what consumers currently have available but still a major technical feat. “We needed to be very careful about parallelizing the work between the four GPUs, so that the memory transfers between them are pipelined in such a way that they don’t introduce any extra latency and have virtually zero compute cost,” says Nouri.
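The pipelining Nouri describes is a general pattern: give each stage of the per-frame work its own worker so that, while one stage handles the current frame, the previous stage is already starting on the next one. The sketch below is a minimal, hypothetical illustration of that idea using plain Python threads and queues; it is not Facebook’s implementation, and the three toy stages merely stand in for steps like copying data to a GPU, running the network, and copying results back.

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Push each frame through a chain of stages, one thread per stage,
    so stage k can begin frame i+1 while stage k+1 still works on frame i."""
    # One queue between each pair of adjacent stages (plus input and output).
    queues = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:          # sentinel: shut down and pass it along
                q_out.put(None)
                return
            q_out.put(stage(item))

    threads = [
        threading.Thread(target=worker, args=(s, queues[i], queues[i + 1]))
        for i, s in enumerate(stages)
    ]
    for t in threads:
        t.start()

    # Feed all frames, then the shutdown sentinel.
    for f in frames:
        queues[0].put(f)
    queues[0].put(None)

    # Collect results from the last queue until the sentinel arrives.
    results = []
    while True:
        out = queues[-1].get()
        if out is None:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Toy stand-ins for "copy to GPU", "run the network", "copy back".
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 1]
print(run_pipeline([1, 2, 3], stages))  # [3, 5, 7]
```

Because every stage runs concurrently, the steady-state cost per frame approaches the slowest single stage rather than the sum of all stages, which is what makes transfers between GPUs effectively “free” when they overlap compute.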

The extreme GPU requirements for this technology mean it would likely be costly to reach the broad consumer market. However, neural networks could be optimized for lower-end hardware, and we note that Facebook has been hiring more people to work on custom silicon chips — an area that could help lower overall system cost to the end user.

Update: Story updated after publication with additional context as well as details shared in the Oculus blog post.

Update 2: The licensing agreement for DeepFocus specifies it grants the right to “reproduce and share the licensed material. . .for noncommercial purposes only.”