Real-time ray tracing is upon us. Electronic Arts leads the charge in integrating the technology at the engine level.

This video series features Sebastien Hillaire, Senior Rendering Engineer at EA/Frostbite. Sebastien discusses real-time ray tracing and global illumination (GI) workflows in Frostbite. He explains the context and the current workflows Frostbite uses, then describes the GI live preview tech in Frostbite, built on the DX12 and DXR APIs on top of NVIDIA RTX technology.

We know it’s hard to set aside an hour to watch a recorded talk, so we’ve broken Sebastien’s talk into four segments that can each be watched in seven minutes or less.

Part 1: Current Global Illumination in EA Games (5:55 min)

To understand where we’re going with in-game lighting, it’s useful to take a quick look back at how we got to where we currently are. In this video, Sebastien explains the scene lighting processes EA has been using to make FIFA’s stadiums and Star Wars Battlefront 2’s planet environments look fantastic.

Five Key Things from Part 1:

The Frostbite Flux path tracer allows for fully precomputed lighting, with a focus on density, quality, and performance.

The CPU baking process in the Flux path tracer uses Intel Embree and Incredibuild.

The Flux workflow starts with authoring in the FrostEd editor, then moves to baking 50k to 500k rays/texel, which can take minutes to hours. A final check process verifies that the final lightmap irradiance volume looks as expected.

Tracing is parallel; each lightmap texel is independent, and each path is independent. This is perfect for GPUs.

Of course, one ray is not enough. You have to integrate multiple rays, and that can be done using Monte Carlo integration, which allows for incremental refinement.
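The incremental refinement idea can be sketched as a small cosine-weighted Monte Carlo estimator. This is a hypothetical illustration, not Frostbite code; the point is that the running mean can be previewed at any time and only improves as more rays are traced.

```python
import math
import random

def estimate_irradiance(radiance_fn, num_samples, seed=0):
    """Monte Carlo estimate of irradiance E = integral of L(w) * cos(theta)
    over the hemisphere, using cosine-weighted sampling (pdf = cos(theta)/pi).
    The running mean refines incrementally, so a partial result is usable."""
    rng = random.Random(seed)
    mean = 0.0
    for i in range(1, num_samples + 1):
        # Cosine-weighted hemisphere sample (Malley's method).
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(max(0.0, 1.0 - u1))  # z = cos(theta)
        # With pdf = cos(theta)/pi, the cosine terms cancel:
        # the estimator is simply pi * L(w).
        sample = math.pi * radiance_fn(x, y, z)
        mean += (sample - mean) / i  # incremental (running-mean) refinement
    return mean

# Constant radiance L = 1 in every direction: the analytic irradiance is pi.
e = estimate_irradiance(lambda x, y, z: 1.0, 256)
```

Because each lightmap texel runs its own independent estimator like this, the whole process parallelizes trivially across texels.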

Part 2: Live Authoring of a Scene (4:15 min)

Next, Sebastien provides tips on how to run Flux and Frostbite smoothly at the same time (hint: use two GPUs), and he shows some live examples of authoring a scene.

Five Key Things From Part 2:

What components are necessary for effective live authoring for Flux DXR input? You need meshes (triangles, lightmap UVs) and materials (albedo, translucency, and emissive), as well as every light type and sky in Frostbite.

Flux outputs lightmaps (irradiance and dominant indirect light direction) and irradiance volumes.

Flux DXR differs in some ways when compared to the CPU bake process used today. The plan is ultimately to move to GPU baking.

When setting up DXR, you can use a single GPU, but Frostbite and Flux compete for it. It’s best to use two GPUs instead.

With two GPUs set up, run asynchronous mode for best results: each process (Frostbite and Flux) can then run as fast as possible.
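The asynchronous two-GPU setup amounts to a producer/consumer pattern: the tracer publishes lightmap snapshots at its own pace, and the renderer always reads the latest one without ever waiting. A minimal sketch (hypothetical code; threads stand in for the two GPUs, and all names are made up):

```python
import threading
import time

class LatestLightmap:
    """Single-slot mailbox: the tracer publishes whole lightmap snapshots,
    and the renderer always reads the most recent one without blocking."""
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._value = initial

    def publish(self, value):
        with self._lock:
            self._value = value

    def latest(self):
        with self._lock:
            return self._value

def flux_tracer(mailbox, iterations):
    # Stand-in for Flux: each iteration refines the lightmap a little
    # and publishes the new snapshot.
    lightmap = 0.0
    for i in range(1, iterations + 1):
        lightmap += (1.0 - lightmap) * 0.5  # fake refinement step
        mailbox.publish((i, lightmap))

def frostbite_renderer(mailbox, num_frames):
    # Stand-in for Frostbite: renders as fast as possible with whatever
    # snapshot is currently available; it never waits on the tracer.
    frames = []
    for _ in range(num_frames):
        frames.append(mailbox.latest())
        time.sleep(0.001)
    return frames

mailbox = LatestLightmap((0, 0.0))
tracer = threading.Thread(target=flux_tracer, args=(mailbox, 1000))
tracer.start()
frames = frostbite_renderer(mailbox, 10)
tracer.join()
```

The key property is that neither side blocks the other, which is exactly what lets each process run at its own maximum rate.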

Part 3: Tracing Lightmap Texels (4:43 min)

In part 3, Sebastien describes how samples are acquired for lightmapping, why a certain operation might invalidate the lightmap, and how direct light should be managed. He also gives some pointers on translucency support.

Seven Key Things from Part 3:

Samples are accumulated over time. When the artist makes a change (for example, moving a mesh or editing a material), the accumulated GI data becomes invalid, so the process restarts.

Secondary shadow rays are incoherent and thus more expensive.

An irradiance cache can be used for direct light (sun, local lights, emissive).

The cache has the same parameterisation as the lightmaps, and it refills over multiple frames when lighting conditions change.

Sampling the next event along the path becomes a simple texture fetch.

Translucency support on thin surfaces becomes cheap.

Tracing is expensive; preview only what is visible. Render/update only visible texels.
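The accumulation, invalidation, and visible-texel budgeting points above can be combined into one small sketch (a hypothetical illustration; none of these names come from Frostbite):

```python
class TexelAccumulator:
    """Per-texel running average of traced samples. A scene edit invalidates
    the accumulated GI data, restarting accumulation from zero; each frame,
    only texels currently visible in the preview are advanced."""
    def __init__(self, num_texels):
        self.mean = [0.0] * num_texels
        self.count = [0] * num_texels

    def invalidate(self):
        # Called when the artist edits the scene (moves a mesh,
        # changes a material, ...): the old GI data is no longer valid.
        for i in range(len(self.mean)):
            self.mean[i] = 0.0
            self.count[i] = 0

    def add_sample(self, texel, value):
        # Incremental running-mean update for one texel.
        self.count[texel] += 1
        self.mean[texel] += (value - self.mean[texel]) / self.count[texel]

def update_visible(acc, visible_texels, trace_fn):
    # Tracing is expensive: spend the per-frame budget only on the
    # texels that are actually visible in the preview.
    for t in visible_texels:
        acc.add_sample(t, trace_fn(t))

acc = TexelAccumulator(4)
update_visible(acc, [0, 2], lambda t: 1.0)
# Texels 1 and 3 were never visible, so no rays were spent on them.
```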

Part 4: Battling Noise, Irradiance Volumes, and Future Thoughts (6:20 min)

In the final video, Sebastien explains how to battle the noise that is an inevitable part of ray tracing. He also touches on irradiance volumes (looking at performance on Titan V), and offers his thoughts on the future of real-time ray tracing.

Five Key Things from Part 4:

A forward path tracer produces a lot of noise (variance).

The Frostbite solver tracks variance per texel to know when a texel has converged. This is used both for balancing the tracing budget where it matters in the frame and for denoising the lightmap.

For denoising, a hierarchical à-trous filter is applied in lightmap space on the lightmap every frame before it is presented to the user, and the denoising is faded out as texels converge.

In the future, the goal is to mix image-space and lightmap-space denoisers to reduce light leaks, and to feed converged texels back as indirect irradiance in order to shorten paths.

With irradiance volumes, you trace probe irradiance for visible volumes and store it as L2 spherical harmonics, using the same integrator as for lightmap texels but over the full sphere instead of a hemisphere.
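Storing probe irradiance as L2 spherical harmonics means projecting a directional function onto nine basis functions. A minimal sketch of that projection, assuming uniform Monte Carlo sampling over the full sphere (hypothetical code, not Frostbite's):

```python
import math
import random

def sh_basis_l2(x, y, z):
    """Real spherical-harmonics basis up to l = 2 (9 coefficients),
    in the usual constant-folded form, for a unit direction (x, y, z)."""
    return [
        0.282095,                        # Y(0, 0)
        0.488603 * y,                    # Y(1,-1)
        0.488603 * z,                    # Y(1, 0)
        0.488603 * x,                    # Y(1, 1)
        1.092548 * x * y,                # Y(2,-2)
        1.092548 * y * z,                # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),  # Y(2, 0)
        1.092548 * x * z,                # Y(2, 1)
        0.546274 * (x * x - y * y),      # Y(2, 2)
    ]

def project_sh_l2(radiance_fn, num_samples, seed=0):
    """Monte Carlo projection of a directional function onto SH L2,
    sampling uniformly over the full sphere (pdf = 1 / 4pi)."""
    rng = random.Random(seed)
    coeffs = [0.0] * 9
    for _ in range(num_samples):
        # Uniform direction on the unit sphere.
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(max(0.0, 1.0 - z * z))
        x, y = s * math.cos(phi), s * math.sin(phi)
        f = radiance_fn(x, y, z)
        for i, b in enumerate(sh_basis_l2(x, y, z)):
            coeffs[i] += f * b * (4.0 * math.pi) / num_samples
    return coeffs

# A constant function projects (up to MC noise) onto the DC term only.
c = project_sh_l2(lambda x, y, z: 1.0, 4096)
```

Note this samples the full sphere, which is exactly what distinguishes the probe integrator from the hemisphere integrator used for surface texels.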

We hope you’ve gotten value from this installment in NVIDIA’s Coffee Break series!

If this intrigues you, you can find the full presentation on the GDC Vault. You can also download the presentation.