Ryse’s Advanced Graphics Tech Explained; How it Became a “Visual showcase” for Xbox One

Giuseppe Nelva March 26, 2014 8:55 PM EST

When CryTek was tasked with creating a visually stunning game for the Xbox One, they found themselves with a problem to solve: how do you create an experience that shows off the graphical fidelity of the console on hardware weaker than their usual PC target?

The technology and solutions they used were explained in depth during a panel at the Game Developers Conference titled “Moving to the Next Generation – The Rendering Technology of Ryse”, hosted by Senior Rendering Engineer Nicolas Schulz.

Here’s what we learned from the panel:

The project had a small team of rendering engineers fully dedicated to the game.

It was designed as a “visual showcase” for the Xbox One from the get-go, despite the fact that the target hardware was less powerful than the PC hardware CryTek usually targeted.

This created a major challenge: How can you still get people excited for next-gen visuals?

Since Crysis 3 was already a visually rich game, simply adding more visual fidelity wasn’t an option on weaker hardware, and post-processing had already been maxed out in the previous generation. That pushed CryTek to focus on the details instead: shading, material definition, lighting quality and global illumination effects.

The developer wanted to escape the usual “gamey” look and get closer to CG film quality by implementing easily recognizable materials, a clean image free of aliasing, and soft, realistic lighting.

Physically Based Shading was used for the game, creating an interaction between light and materials similar to the real world. This had considerable implications, as it enforced a plausible material model and defined clear rules for assets, enhancing consistency across the board. It also involved more complex BRDFs (bidirectional reflectance distribution functions), Fresnel terms, normalization of specular highlights, and energy conservation in general.
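To make the energy-conservation idea concrete, here is a minimal sketch of two of the ingredients mentioned: Schlick's Fresnel approximation and a normalized Blinn-Phong specular lobe. This is an illustrative example of the general technique, not CryTek's actual shader code.

```python
import math

def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.
    f0 is the reflectance at normal incidence (~0.04 for most dielectrics);
    reflectance rises toward 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def normalized_blinn_phong(n_dot_h, exponent):
    """Blinn-Phong specular lobe with an energy-conserving normalization
    factor of (n + 2) / (2 * pi): sharper highlights get brighter so the
    total reflected energy stays roughly constant."""
    norm = (exponent + 2.0) / (2.0 * math.pi)
    return norm * n_dot_h ** exponent
```

Without the normalization factor, raising the exponent would make highlights smaller but not brighter, visibly losing energy as materials get smoother.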

The lighting model was especially affected as the developer had to be careful to preserve material integrity.

The Oren-Nayar model was used for the diffuse BRDF, as it takes into account retro-reflection based on surface roughness. It improves quality for rough materials like stone, while still producing results similar to the traditional Lambertian model for smooth materials.
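A simplified (qualitative) form of the Oren-Nayar diffuse term looks like this; note how setting the roughness to zero collapses it back to the Lambertian cosine term, which is exactly the "similar results for smooth materials" property described above. Parameter names are assumptions for illustration.

```python
import math

def oren_nayar(theta_i, theta_r, phi_diff, sigma):
    """Simplified Oren-Nayar diffuse term.
    theta_i/theta_r: incident/reflected polar angles (radians);
    phi_diff: azimuthal angle between light and view directions;
    sigma: surface roughness. sigma == 0 reduces to Lambert's cos(theta_i)."""
    s2 = sigma * sigma
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return math.cos(theta_i) * (a + b * max(0.0, math.cos(phi_diff))
                                * math.sin(alpha) * math.tan(beta))
```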

To enable deferred shading the team started from the code of Crysis 2.

Initially, Forward+ rendering with MSAA anti-aliasing was considered, but the team ultimately went with deferred rendering for most materials, using Forward+ only for materials with specific shading requirements like hair and eyes. Forward+ remains an interesting option for future development.

Physically Based Shading is very prone to specular aliasing; this was fixed by applying a variance filter in screen space, which also helps on thin, highly reflective geometry.
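The panel doesn't spell out the filter's math, but a widely used related technique (Toksvig's specular anti-aliasing) illustrates the core idea: when filtered normals disagree, their average shortens below unit length, and that shortfall is used to widen (reduce) the specular exponent so the highlight can't shimmer. A sketch of that idea, not Ryse's actual screen-space filter:

```python
def toksvig_exponent(avg_normal_len, spec_exponent):
    """Toksvig-style specular anti-aliasing sketch.
    avg_normal_len: length of the averaged (filtered) normal, <= 1.0.
    High normal variance -> shorter average -> lower returned exponent,
    i.e. a wider, more stable specular lobe."""
    if avg_normal_len >= 1.0:
        return spec_exponent  # no variance, lobe unchanged
    ft = avg_normal_len / (avg_normal_len
                           + spec_exponent * (1.0 - avg_normal_len))
    return ft * spec_exponent
```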

Unfortunately the filter created noticeable outline artifacts. This was partly mitigated by reducing specular reflectance for dielectrics, but ultimately the team decided that temporal stability mattered more than the remaining artifacts.

Lighting was especially important for the game, and the team implemented a quite complex model:

No analytical light models for direct lighting were used in Ryse, as the scenario doesn’t include artificial light sources.

Indirect lighting was made using localized environment probes augmented by screen space reflections. Ambient lights were used to break uniformity.

There are around 100 probes per level. Specular cubemaps are 256×256 pixels.

To avoid flat ambient, CryTek added multiplicative “ambient light” sources applied on top of probes.

(Comparison screenshots: without ambient lights vs. with ambient lights.)

Glossy real-time local reflections were implemented via screen space reflections, which were first used in the DirectX 11 version of Crysis 2.

They were further evolved in Ryse to work with materials of different roughness.

Simple raytracing is performed to get mirror reflections. Convolved versions of the reflections and alpha are built by repeated downsampling and Gaussian filtering. This is cheaper and simpler than voxel cone tracing, at the cost of slightly inferior results.
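The "repeated downsampling and filtering" step can be pictured as building a chain of progressively blurrier copies of the mirror reflection, with rougher surfaces sampling the blurrier levels. A toy single-channel, 1D sketch of that chain (the real version operates on 2D color + alpha buffers on the GPU):

```python
def downsample_blur(pixels):
    """One step of the convolution chain: a [1, 2, 1]/4 blur followed by
    2x downsampling. Edge pixels clamp to the border."""
    n = len(pixels)
    blurred = []
    for i in range(n):
        left = pixels[max(0, i - 1)]
        center = pixels[i]
        right = pixels[min(n - 1, i + 1)]
        blurred.append((left + 2 * center + right) / 4.0)
    return blurred[::2]

def build_reflection_chain(pixels, levels):
    """Level 0 is the sharp raytraced result; each further level is
    blurrier and half the size. Roughness selects the level to sample."""
    chain = [pixels]
    for _ in range(levels - 1):
        chain.append(downsample_blur(chain[-1]))
    return chain
```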

Facial rendering was an extremely important part of Ryse, especially due to the high amount of cutscenes, and raising the quality of characters was one of the designated project goals.

It relied largely on general improvements of lighting and shading, with specific solutions implemented for facial features.

The standard BRDF is more advanced than the one used for the Nvidia Human Head demo.

Subsurface Scattering was used to simulate skin translucency. It was optimized for skin, but was also used for marble.

(Comparison screenshots: subsurface scattering disabled vs. enabled.)

Skin translucency also shared a unified solution with foliage, allowing light to bleed through ears and nostrils.

Density and thickness were specified by the artists via translucency maps.
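How artist-authored thickness and density might feed the translucency term can be sketched with a simple Beer-Lambert-style falloff: the thicker or denser the surface at a texel, the less back-lighting leaks through. Function and parameter names here are assumptions, not Ryse's actual shader interface.

```python
import math

def translucency(light_behind, thickness, density):
    """Light transmitted through a thin surface region, attenuated
    exponentially by the artist-painted thickness and density values
    (e.g. near-zero thickness at ears and nostrils lets light through)."""
    return light_behind * math.exp(-thickness * density)
```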

With advanced facial rendering comes the need for equally advanced hair rendering, and CryTek researched the topic intensively.

The Kajiya-Kay model was used instead of Marschner’s because it’s cheaper and still works well, as hair rendering can be very performance-intensive.

A direction map specifying the hair tangent was deemed essential for quality.
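The connection between the two points above is direct: Kajiya-Kay is built around the hair strand's tangent direction, which is exactly what the direction map supplies. A minimal sketch of the Kajiya-Kay specular term:

```python
import math

def kajiya_kay_specular(t_dot_h, exponent):
    """Kajiya-Kay hair specular term.
    t_dot_h: dot product of the strand tangent T (from the direction map)
    and the half vector H. The lobe peaks when H is perpendicular to the
    strand (t_dot_h == 0), giving the characteristic anisotropic
    highlight that runs across the hair."""
    sin_th = math.sqrt(max(0.0, 1.0 - t_dot_h * t_dot_h))
    return sin_th ** exponent
```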

The main challenge for hair was avoiding aliasing and making it look smooth, especially for individual facial hair and beards.

Alpha-tested hair can look wiry, while alpha-blended hair is smooth but has well-known shortcomings, including sorting issues (it blurs with the background) and the requirement of forward shading.

Using high MSAA anti-aliasing was not feasible on consoles for real-time rendering.

Fully alpha-blended hair was selected, and a new “Thin Hair” feature was created. The sorting issues were solved via a “depth fixup” pass, combining the blended hair with approximated alpha-tested depth values.

Forward+ rendering was used and shading was applied using a light list generated during tiled shading.

The much discussed rendering resolution and upscaling were also explained:

Scene rendering is done at 900p, which the team considered the sweet spot between quality per pixel and number of pixels.

Swapchain backbuffer (all in-game UI and menus) is rendered at 1080p, as text is very prone to upscaling artifacts.

The scene gets upscaled after rendering using a custom upscaling pass, since CryTek hadn’t yet evaluated the Xbox One’s hardware upscaler.
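The arithmetic behind the 900p choice is straightforward: rendering the scene at 1600×900 instead of 1920×1080 shades roughly 31% fewer pixels per frame, budget that can instead go into quality per pixel.

```python
# Pixel counts for 900p scene rendering vs. the 1080p output target.
scene_pixels = 1600 * 900      # 1,440,000 pixels shaded per frame
target_pixels = 1920 * 1080    # 2,073,600 pixels in the final image
shading_ratio = scene_pixels / target_pixels  # ~0.694 -> ~31% savings
```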

Tiled deferred shading was also used, as normal deferred shading is heavy on resources. This saves 2-5 milliseconds of rendering time on average, and much more in worst-case scenarios. A single compute shader takes care of light culling and executes the entire lighting/shading pipeline.
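The light-culling half of that compute shader can be sketched on the CPU: the screen is split into tiles, and each tile records only the lights that can possibly affect it, so the shading pass never iterates over the full light list. A simplified 2D version (a real implementation also culls against per-tile depth bounds, and runs on the GPU):

```python
def build_tile_light_lists(screen_w, screen_h, tile_size, lights):
    """Tiled light culling sketch.
    lights: list of (x, y, radius) screen-space bounding circles in pixels.
    Returns, per tile (row-major), the indices of overlapping lights."""
    tiles_x = (screen_w + tile_size - 1) // tile_size
    tiles_y = (screen_h + tile_size - 1) // tile_size
    tile_lights = [[] for _ in range(tiles_x * tiles_y)]
    for idx, (lx, ly, radius) in enumerate(lights):
        # Range of tiles touched by this light's bounding box.
        x0 = max(0, int((lx - radius) // tile_size))
        x1 = min(tiles_x - 1, int((lx + radius) // tile_size))
        y0 = max(0, int((ly - radius) // tile_size))
        y1 = min(tiles_y - 1, int((ly + radius) // tile_size))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                tile_lights[ty * tiles_x + tx].append(idx)
    return tile_lights
```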

Finally, an enormous static shadow map is generated only once, when each level loads or when transitioning to a different area, taking advantage of the Xbox One’s increased memory. It includes all the static objects in the level and avoids re-rendering distant objects every frame. The shadow map is 8192×8192 at 16 bits per texel, weighs 128 MB and covers a 1-square-kilometer area of the game’s world, providing sufficient resolution. This saves between 40 and 60% of draw calls in shadow map passes.
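The quoted numbers check out: an 8192×8192 map at 2 bytes per texel is exactly 128 MB, and stretched over a kilometer it still leaves around 8 texels per meter of world.

```python
# Static shadow map budget, from the figures given in the panel.
texels = 8192 * 8192                 # 67,108,864 texels
bytes_total = texels * 2             # 16-bit depth = 2 bytes per texel
size_mb = bytes_total / (1024 * 1024)       # 128.0 MB
texels_per_meter = 8192 / 1000.0            # ~8.2 texels/m over 1 km
```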

One thing is for sure: despite being a launch game, Ryse is still one of the most visually impressive titles of the generation. It’s no surprise that compromises had to be made here and there, but the results are definitely pleasing. If you want to see how the game looks with and without some of the effects mentioned above, you can check it out here. For further reference you can find the slides of the presentation here.