We have implemented a new WebGL architecture and support for the multiview extension in Servo that make our WebGL and WebVR rendering much faster. The multiview extension allows stereo rendering in a single pass and can improve VR rendering performance by up to 40%. We implemented it as a WebVR 1.1 extension, and it’s compatible with all the mobile headsets (Google Daydream, Samsung Gear VR, and Snapdragon VR).

We’ve also been working on a multiview-enabled render path in Three.js and plan to land it upstream once the extension is standardized. This will bring significant optimizations to everyone using A-Frame or raw Three.js.

Additionally, we have improved the Servo VR build scripts. In parallel, the Servo team continues to work on embedding APIs and plans to build a high-level drop-in replacement for the Android WebView component.

WebGL architecture redesign

The new Servo WebGL architecture brings render path optimizations, improved source code organization and testability, better synchronization, faster compilation times, and a flexible design to get the best out of new features such as multiview.

The new WebGL render path reduces the number of steps and the latency for each WebGL call to hit the driver, and it reduces the memory footprint of creating new WebGL-based canvases. The entire WebGL implementation has been moved into its own component in Servo, instead of sharing a code base with WebRender, which speeds up the development cycle.

The new component gets rid of fixed IpcSender&lt;T&gt; types and instead relies on a variety of traits and some generics in the main WebGLThread struct. This makes it possible to switch GL threading and command-queuing models using Cargo features at compile time or using runtime preferences (e.g. use a more performant command queue when multiprocess mode is not enabled, or enable multiview rendering straight to the FBOs exposed by VR headsets).

All WebGL calls are queued by default and sent to the WebGL GPU process. This provides the best security and parallelism, because the JavaScript thread has no direct access to the GPU and JS code can run ahead while the heavy GL work happens in a different thread or process.
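As an illustration of the queued model (a toy sketch, not Servo’s actual Rust implementation), calls can be recorded on the script side and shipped to the GPU thread or process in batches:

```javascript
// Toy sketch of the queued WebGL model: commands are buffered on the script
// side and flushed in batches, so the script thread never touches the GPU.
// The class and transport below are illustrative, not Servo code.
class WebGLCommandQueue {
  constructor(send) {
    this.send = send;   // e.g. an IPC channel to the GPU process
    this.buffer = [];
  }
  record(name, args) {
    // Returns immediately; no GPU work happens on the script thread.
    this.buffer.push({ name, args });
  }
  flush() {
    const batch = this.buffer;
    this.buffer = [];
    this.send(batch);   // one message carries the whole batch
  }
}

// Usage with a stand-in transport that just collects batches:
const batches = [];
const queue = new WebGLCommandQueue(batch => batches.push(batch));
queue.record('clearColor', [0, 0, 0, 1]);
queue.record('clear', [0x4000]); // gl.COLOR_BUFFER_BIT
queue.flush();
console.log(batches.length); // 1
```

Batching amortizes the per-call messaging cost, which is exactly the latency the new render path targets.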

We also experimented with a WebGL threading model that uses an in-script GPU thread and fully synchronous GPU calls. This approach allows less parallelism but provides the lowest VR latency when the render tick can complete within a safe frame time. Some WebVR browsers introduce extra frames of latency, partly due to remoting GL commands, which can be very noticeable in VR. We want to make these kinds of optimizations configurable for packaged WebGL/WebVR applications. When you package trusted source code, some of the validations, error checking, and security rules that the spec enforces could be relaxed in favor of performance and latency.

Multiview architecture

The new WebGL architecture, combined with the existing cross-platform rust-webvr library, provided a solid base for multiview integration into Servo.

Our first step was to implement multiview-enabled VR FBOs in the rust-webvr library. Under the hood, it uses the OVR_multiview extension, which lets you bind a texture array to an FBO and reuse a single GPU command buffer for stereo rendering in a single render pass. Once the extension is active, the gl_ViewID_OVR built-in variable can be used in vertex and fragment shaders to render the specifics of each eye/view.
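A sketch of such a shader (based on the OVR_multiview2 spec, not the exact shader from our samples; the uniform names are our own), with the GLSL source held in a JavaScript string as a WebGL app would carry it:

```javascript
// GLSL ES 3.00 vertex shader sketch: gl_ViewID_OVR selects the per-eye view
// matrix, so one draw call produces both views. Uniform names are assumptions.
const multiviewVertexShader = `#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;      // one draw call renders both views
uniform mat4 leftViewMatrix;
uniform mat4 rightViewMatrix;
uniform mat4 projectionMatrix;
in vec4 position;
void main() {
  // gl_ViewID_OVR is 0 for the left eye and 1 for the right eye.
  mat4 view = (gl_ViewID_OVR == 0u) ? leftViewMatrix : rightViewMatrix;
  gl_Position = projectionMatrix * view * position;
}`;
console.log(multiviewVertexShader.includes('gl_ViewID_OVR')); // true
```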

For the Servo integration, we decided to directly expose the VR framebuffers provided by the headsets, using the opaque multiview framebuffers approach proposed in the WEBGL multiview draft. This enables the use of multiview in WebGL 1.0 as long as the WebGL implementation allows GLSL ES 3.00 shaders (which isn’t true in all browsers). We updated our ANGLE dependency in order to support the OVR_multiview shader validations, but it didn’t work correctly, and we had to submit some patches upstream to ANGLE for correct transpilation.

Servo supports multiview rendering straight to the headsets (e.g. the Daydream GVR context). This render path doesn’t require any texture copies while rendering a frame, which improves both memory footprint and latency.
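For reference, the setup behind a multiview framebuffer can be sketched as follows. The framebufferTextureMultiviewOVR call follows the OVR_multiview2 WebGL extension draft; the helper function itself is illustrative, not Servo code:

```javascript
// Sketch of multiview FBO setup per the OVR_multiview2 WebGL extension draft.
// A 2-layer texture array is attached so one draw call renders both eyes.
function createMultiviewFramebuffer(gl, ext, width, height) {
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fbo);

  // Texture array with one layer per view: 0 = left eye, 1 = right eye.
  const colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, 2);

  // Attach both layers at once (level 0, baseViewIndex 0, numViews 2).
  ext.framebufferTextureMultiviewOVR(
    gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, 0, 2);
  return fbo;
}
```

In Servo’s direct path, the attached layers are the ones the headset consumes, which is why no per-frame texture copy is needed.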

The technical part was solved, but there wasn’t a clean way to expose multiview for WebVR in JavaScript given the current status of the specs:

The WebVR 1.1 API is not multiview friendly because it uses a side-by-side rendered canvas element.

The WebVR 2.0 API includes multiview support, but it’s still under heavy churn and has a “do not implement” status.

For now, we opted to include part of the WebVR 2.0 WebGLFramebuffer API in WebVR 1.1 using an ad-hoc extension method, vrDisplay.getViews(), which is the entry point to the new functionality. We adapted an official WebVR sample to test the API.
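To illustrate how a page could drive it (a sketch only: vrDisplay.getViews() is the ad-hoc extension described above, but the shape of the returned view objects and the render loop are our assumptions):

```javascript
// Illustrative render loop over the views returned by the ad-hoc API.
// Under multiview a single view entry covers both eyes, so the scene is
// drawn once; the classic path draws once per eye.
function renderViews(views, drawScene) {
  let passes = 0;
  for (const view of views) {
    drawScene(view); // bind the view's framebuffer/viewport, then draw
    passes++;
  }
  return passes;
}

// In a real page the array would come from vrDisplay.getViews().
// Stand-in data for the two possible shapes (field names are assumptions):
const stereoViews = [{ eye: 'left' }, { eye: 'right' }];
const multiviewViews = [{ multiview: true }];
console.log(renderViews(stereoViews, () => {}));    // 2
console.log(renderViews(multiviewViews, () => {})); // 1
```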

All the native work we are doing will be reused when we implement the WebVR 2.0 API. The same applies to our efforts to add multiview support to Three.js. We are using opaque WebGLFramebuffers, which makes all these contributions fully compatible with the WebVR 2.0 spec.

We used a webvr.info sample to measure the impact of multiview on our WebVR implementation. We modified it to use duplicated draw calls, making it more CPU bound so we could test a more extreme case. We plan to do much more detailed comparisons using Three.js once all the patches are ready. From our measurements, you can expect improvements of up to 40% in CPU-bound applications.

Conclusions

We love to save draw calls and squeeze performance. Multiview will be a performance booster for WebVR, improving the quality of VR experiences in the browser. We are also really glad to help the WebVR community by contributing the multiview support to Three.js.

We will keep improving the WebGL and WebVR implementations in Servo. We will soon start adding AR capabilities, and we will improve Firefox by sharing our optimizations as part of the Quantum project. Ah! And we've already kicked off the Servo WebGL 2.0 implementation ;)