In this blog series, we go over every aspect of the creation of our demo, Book of the Dead. Today, we focus on our partnership with Quixel, our wind system, scene building, and content optimization tricks. This is the fourth blog in our ‘Making Of’ series. In case you missed them, take a look back at the last three posts, which go through the creative process for characters, concept art and photogrammetry assets, trees, and VFX within Book of the Dead.

Hi! My name is Julien Heijmans, and I work as an Environment Artist on the Unity Demo team. I only joined Unity last year, but I have around 7 years of experience in the video game industry. This blog post will give you some insight into the production of Book of the Dead from my perspective: that of a content creator and environment artist.

I am fairly new to the world of photogrammetry assets, but I clearly remember the day Quixel announced the creation of Megascans several years ago. Ever since, I had been eager to get an opportunity to work with their assets. Joining Unity’s Demo team made that happen, as I started to work on Book of the Dead.

If you want to start experimenting with the tools discussed in this blog, you can download the Book of the Dead: Environment project now.

Download the project

Partnership with Quixel

When I joined the project, I realized that we were not only using assets from Quixel’s Megascans library; Unity and Quixel were partnering on the creation of this project.

During the production process, the Demo Team created a list of the assets they would need, and Quixel would capture new assets when there was no appropriate match in their existing library. Many of those assets were vegetation, such as grass, plants, and bushes, which requires proper equipment and setup to scan.

Quixel not only provided us with texture sheets for those assets, but also created the geometry, complete with LODs and the vertex color setup needed to support our wind shader.

Between the released Book of the Dead: Environment project and the unreleased assets used in the teaser, we received over 50 assets of a quality and complexity that the few artists on our team would have struggled to produce within our deadlines.

During production, we could get the assets into the engine, and looking good, pretty quickly. We would often tweak the textures (mostly the albedo: adjusting brightness/levels/curves and unifying colors across the scene), repack them properly, tweak the LODs a bit to the level we wanted, assign the textures to a new HDRP Lit material, and be done with it.

Luckily, Quixel recently released a tool, Megascans Bridge, that does most of the importing work we used to do manually, saving time on tasks such as repacking textures for HDRP.

For those who are interested in more Megascans assets, here’s a reminder that there are several Megascans collections on the Unity Asset Store. All the assets are ready to be imported into a project set up with the High Definition Render Pipeline or the Lightweight Render Pipeline.

Wind

The creation of a wind system for vegetation assets, and its whole pipeline, is always a tricky process. There are many different kinds of vegetation assets that need to be animated in different ways; two different trees might require completely different setups and shader complexity.

For this reason, our team decided to create a custom vertex-shader-based procedural animation for the wind effect on our vegetation assets. We tailored it to our specific project and the trees and bushes it contains, which gives us complete control over it.

Torbjorn Laedre, our Tech Lead, built a shader that supports several different types of vegetation, using three different techniques:

Hierarchy Pivot, for our trees and some plants with a very defined structure/hierarchy

Single Pivot, for grass, small plants, and large bushes with an undefined structure/hierarchy

Procedural Animation, for vegetation assets where pivots cannot be predicted.

The trees were the most complex assets to prepare on the content side. They use the Hierarchy Pivot type of animation and rely on three distinct levels of hierarchy:

Trunk, which rests on the ground.

Branches Level A, which are connected to the trunk.

Branches Level B, which are connected to the branches of Level A.

The shader needs to know the hierarchy level and the pivot of every single vertex of the tree. I first had to author the geometry of the tree itself, and then assign the hierarchy level for every polygon of the tree using the green vertex color channel.

A value of 0 in the green vertex color channel signifies the trunk

A value between 0 and 1 signifies the Level A branches

A value of 1 signifies the Level B branches

Using Autodesk Maya and some small scripts, I was able to set up all of the LODs of an asset in 10-15 minutes.
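As a rough illustration of that encoding, here is a Python sketch; the function names and the 0.5 value for Level A are my own (any value strictly between 0 and 1 works), and this is not the actual Maya script:

```python
# Illustrative sketch of the green-channel hierarchy encoding.
# TRUNK/BRANCH_A/BRANCH_B and both functions are hypothetical names.

TRUNK, BRANCH_A, BRANCH_B = 0, 1, 2

def hierarchy_to_green(level):
    """Map a wind hierarchy level to the green vertex color value the shader expects."""
    if level == TRUNK:
        return 0.0          # trunk: green = 0
    if level == BRANCH_A:
        return 0.5          # Level A branches: any value strictly between 0 and 1
    if level == BRANCH_B:
        return 1.0          # Level B branches: green = 1
    raise ValueError(f"unknown hierarchy level: {level}")

def paint_vertex_colors(polygon_levels):
    """Return one (r, g, b) vertex color per polygon; only green carries data here."""
    return [(0.0, hierarchy_to_green(lvl), 0.0) for lvl in polygon_levels]
```

In production this tagging was applied per polygon in Maya, across every LOD of each tree.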

In addition to this, we also used what we called a ‘Flutter Mask’: a texture mask that helps determine where in the geometry the pivot of each branch lies. We used this for branches whose geometry relies on hard alpha textures. Here is an illustration of this mask.

With all this information prepared, I could use a C# script that takes my tree prefab as input and generates a new prefab with the pivot information of every vertex baked in. After adding a WindControl object to my scene, I could place the tree in the scene and start playing with the material properties.

You can see that each hierarchy level has a range property (basically the length of the trunk or branches) and an elasticity property.

There are also some properties to set up the wind flutter animation. These add a bit of procedural noise to the vertex positions to imitate the vibration of the branches when the wind blows through them.
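To make those properties concrete, here is a hedged Python sketch of the kind of math they control; the function names and the exact falloff curve are my own illustration, not the project’s shader code:

```python
import math

def bend_offset(dist_from_pivot, rng, elasticity, wind_strength):
    """Illustrative bend: how far a vertex is pushed along the wind direction.

    Vertices further from their pivot (relative to the level's 'range')
    bend more; 'elasticity' shapes the falloff curve. All names here are
    hypothetical stand-ins for the shader's properties.
    """
    t = min(dist_from_pivot / rng, 1.0)   # 0 at the pivot, 1 at the tip
    return wind_strength * (t ** elasticity)

def flutter(time, frequency, amplitude, phase):
    """Small procedural oscillation imitating branch vibration in the wind."""
    return amplitude * math.sin(2.0 * math.pi * frequency * time + phase)
```

In the real shader this runs per vertex, driven by the per-level range/elasticity material properties and the flutter mask.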

Last but not least, we made the wind sound FX influence the wind animation: the volume of the sound drives the wind strength of the animation. It is really surprising how much such a simple idea adds to the project. If you have not done it already, open the project and walk around; you will notice the trees and all the grass around you shaking when you hear large gusts of wind hit your surroundings.
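A minimal sketch of how a sound volume could drive wind strength (illustrative Python with hypothetical names, not the project’s implementation):

```python
def smooth_wind_strength(volumes, min_strength, max_strength, smoothing=0.1):
    """Map a stream of wind SFX volume samples (0..1) to a wind strength value.

    An exponential moving average keeps the animation from snapping on
    every volume spike; gusts in the audio ramp the wind up and down.
    """
    strength = min_strength
    out = []
    for v in volumes:
        target = min_strength + v * (max_strength - min_strength)
        strength += (target - strength) * smoothing
        out.append(strength)
    return out
```

With a gust in the audio, the strength rises toward the maximum over a few frames instead of jumping instantly.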

Layout

When targeting the level of detail and density of a project like Book of the Dead, it was important for me to think about how to structure the level in order to avoid performance issues later in production. One of the things I tried to be careful about was limiting long view distances in the scene, which you can do by placing ‘corridors’ and ‘bottlenecks’ in the layout of the scene.

Those layouts, together with assets correctly set up with the ‘Occluder Static’ and ‘Occludee Static’ flags, make Unity’s occlusion culling more efficient.

This video shows the Occlusion Culling visualization; you can easily guess where the camera is looking from the top view. Around the end of the video, I toggle occlusion culling on and off so you can see which objects are being culled.

You will also see that some objects are not culled. Those are mostly the really tall trees, some over 25 meters tall, which have very large bounding boxes and are therefore hard to cull behind the cliffs.

Use of Unity legacy terrains

When the trailer was released, we saw comments saying there was no way we were using the legacy terrain system. But that is exactly what we use; we modified the HD Render Pipeline’s Layered Lit shader to support it.

The HDRP Layered shader allows layers to be blended using their heightmap textures, so the result is better than the linear blend that comes with the legacy terrain shader.
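The idea behind height-based blending can be sketched as follows; this is an illustrative Python version of a common height-blend formula, not HDRP’s actual shader code:

```python
def height_blend(c1, h1, c2, h2, w, depth=0.2):
    """Blend two layer colors using their heightmap samples.

    w is the painted splat weight for layer 2 (0..1). Instead of a plain
    lerp, the layer whose height 'wins' within a small transition band
    (depth) dominates, which keeps blend borders crisp: pebbles poke
    through sand instead of fading into it.
    """
    b1 = h1 + (1.0 - w)
    b2 = h2 + w
    ma = max(b1, b2) - depth            # top of the transition band
    w1 = max(b1 - ma, 0.0)
    w2 = max(b2 - ma, 0.0)
    return tuple((a * w1 + b * w2) / (w1 + w2) for a, b in zip(c1, c2))
```

With equal heights this degenerates to a plain average; with very different heights the higher layer takes over almost completely.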

This is, of course, a temporary solution, and it is not properly integrated into the UI. To change the terrain textures, you will need to edit the material applied to the terrain instead of using the ‘Edit Texture’ button in the Paint Texture tab of the terrain object.

If you want to create a new terrain and apply different textures to it, you will need to duplicate the TerrainLayeredLit material and assign it to your new terrain. You will also need to create those four texture sets in the Paint Texture tab. The textures assigned there won’t be used for rendering the terrain, but they allow you to paint the different layers on your terrain. It is also there that you can change the tiling properties of the different layers.

Also, to be able to fully use the LODGroup feature, all of the assets placed through the terrain are set up as Trees, not as detail assets.

This project actually has a really high number of assets scattered on the ground: grass, bushes, plants, wooden twigs, rocks, etc. With all of this, the terrain itself can be fairly simple; you can see below that in this particular shot, the terrain is just a simple tiling material.

Scattered detail assets

When you walk around the level, you will notice in places a very large amount of small twigs and pinecones scattered on the ground.

Those are not that obvious when you simply walk around the level, but they really raise the level of detail of the scene when you start looking at the ground. There are sometimes hundreds of tiny twigs on the ground, between rocks and dead trunks, just as they would come to rest after falling from the trees. Placing these by hand would simply be impossible, which is why Torbjorn Laedre made a tool to help us scatter those small details across the level.

The twigs are simple cutout planes with an alpha material, to which we added physics capsule colliders.

The script first spawns the desired quantity of those scatter objects around a transform position, then simulates physics so that they fall down onto the ground, colliding with the terrain and all the other assets (rocks, dead trunks, etc.). Then, by pressing the ‘Bake’ button, they are stripped of their colliders, merged into a single object, and assigned a LODGroup with a specific distance at which they are culled.
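A heavily simplified sketch of that spawn-and-bake flow (illustrative Python with hypothetical names; the real tool runs an actual physics simulation against the scene’s colliders in Unity):

```python
import random

def scatter(center, count, radius, ground_height, seed=0):
    """Simplified spawn step: place objects randomly around a transform
    position and drop them onto the ground.

    In the real tool the drop is a physics simulation against high-density
    mesh colliders; here a ground_height(x, z) callback stands in for that.
    """
    rnd = random.Random(seed)
    objs = []
    for _ in range(count):
        x = center[0] + rnd.uniform(-radius, radius)
        z = center[2] + rnd.uniform(-radius, radius)
        objs.append((x, ground_height(x, z), z))
    return objs

def bake(objs, cull_distance):
    """The 'Bake' step reduced to its essence: one merged object with a
    single cull distance, instead of many individual physics objects."""
    return {"vertices": objs, "cull_distance": cull_distance}
```

The key point is the bake: hundreds of simulated twigs collapse into one mesh with one LODGroup cull distance, so the scatter costs almost nothing at runtime.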

This script is used by objects called ‘UberTreeSpawner’ in the scene, and you are free to use it as you wish.

Side note about this tool: for the twigs and other scattered objects to fall properly onto the ground and other assets, you need quite high-density mesh colliders on all the assets in the scene. At the same time, you don’t want those heavy colliders to be used when the game is running. For this reason, most of the assets in the scene have two different colliders: a light one used at runtime in Play mode by the PlayerController, assigned to the Default layer, and one used exclusively for the physics simulation of those twigs, assigned to the ‘GroundScatter’ layer.

Lighting

The Book of the Dead: Environment project is using baked indirect global illumination with real-time direct lighting.

Both the indirect lighting from the sun and the direct plus indirect lighting from the sky are baked into lightmaps and light probes. Reflection probes, occlusion probes, and other sources of occlusion are baked as well. The direct sun contribution, on the other hand, is real-time lighting. Shading in the HD Render Pipeline looks best when using real-time direct light, and this also gives us some freedom to animate the rotation, intensity, and color temperature of the directional light at runtime.

Since the indirect lighting is baked, we cannot change the intensity and color of the directional light too much, or it will no longer match the baked lighting. We wouldn’t be able to get away with a full day/night cycle in this setup, even though a forest is quite a forgiving environment when it comes to obscuring mismatched indirect lighting.

Baked lightmaps are used mostly for the terrain and a few other assets, but we preferred a combination of light probes and occlusion probes for all the rocks and cliffs in the project, as they provide better results for objects with sharp angles and crisp normal maps.

Occlusion Probes

Lighting a dense forest is tricky to achieve in real time. Trees, with all their leaves and branches, have a huge surface area and complex geometry, so it’s not practical to cover them with lightmaps. Using a single light probe per tree would give it uniform lighting from bottom to top. Light Probe Proxy Volumes are closer to what we want, but it’s not practical to crank up the grid resolution enough to capture fine details.

That is why our Senior Graphics Programmer, Robert Cupisz, developed occlusion probes.

From an artist’s point of view, it’s a really nice and easy feature to use: you simply add the object to the scene, scale the volume gizmo it displays so that it covers the area you want, and then set up its resolution parameters in X, Y, and Z.

It also allows you to create ‘Detail’ occlusion probes if you want some areas of the scene to have a higher density of probes. Once everything is set up, you need to bake the lighting of the whole scene; the occlusion probes are baked during that process.

Each probe in the 3D grid samples sky visibility by shooting rays into the upper hemisphere and stores the result as an 8-bit value going from 0 (fully occluded) to 1 (fully visible). This gives us darker areas wherever there’s a higher concentration of leaves and branches, even more so when a few trees are clustered together.

Probes unlucky enough to land inside trunks or rocks will be fully black. To prevent that darkness from leaking out, they are marked as invalid and overwritten by neighboring valid probes.
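That neighbor-overwrite step can be sketched like this (illustrative Python over a dict-based 3D grid; the real bake operates on the probe grid inside Unity):

```python
def fill_invalid_probes(grid, valid):
    """Overwrite invalid probes (those baked inside geometry) with the
    average of their valid axis-aligned neighbors, so their blackness
    doesn't leak into nearby foliage.

    grid maps (x, y, z) cell coordinates to sky visibility (0..1);
    valid maps the same cells to True/False.
    """
    fixed = dict(grid)
    for cell, ok in valid.items():
        if ok:
            continue
        x, y, z = cell
        neighbors = [
            grid[n]
            for n in [(x - 1, y, z), (x + 1, y, z), (x, y - 1, z),
                      (x, y + 1, z), (x, y, z - 1), (x, y, z + 1)]
            if valid.get(n)
        ]
        if neighbors:                       # leave it untouched if fully surrounded
            fixed[cell] = sum(neighbors) / len(neighbors)
    return fixed
```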

Since the probes sample how much of the sky is visible, they should only attenuate the direct sky contribution. For this reason, the lightmapper is set up to exclude the direct sky contribution from regular light probes, and probe lighting is then composed as the light probe result plus the direct sky contribution attenuated by the occlusion probes.
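The composition described above can be sketched as follows (illustrative Python, with made-up scalar stand-ins for the actual spherical-harmonics probe data):

```python
def quantize_sky_visibility(v):
    """Store sky visibility (0..1) as a single 8-bit value, as the
    occlusion probes do."""
    return max(0, min(255, round(v * 255)))

def probe_lighting(light_probe_indirect, sky_direct, occlusion_byte):
    """Compose final probe lighting as described in the text: indirect
    light from regular light probes, plus the direct sky contribution
    attenuated by the baked occlusion probe value."""
    occlusion = occlusion_byte / 255.0
    return light_probe_indirect + sky_direct * occlusion
```

Under dense canopy the occlusion value approaches 0 and only the indirect term survives; in a clearing it approaches 1 and the full direct sky is added back.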

This way, we can have tons of cheap occlusion probes sampling the small details of how foliage occludes the sky, bringing depth to the image, plus a much smaller number of more expensive light probes sampling slower-changing indirect light.

If you want to have a clearer picture of how they affect the scene, you can also use the SkyOcclusion Debug view.

The API for baking occlusion probes and excluding the direct sky contribution from light probes was added in Unity 2018.1, and all the scripts and shaders are available in the project.

Atmospheric Scattering

We ported and re-used the Atmospheric Scattering solution that we originally developed for the Blacksmith demo.

Our Senior Programmer Lasse Jon Fuglsang Pedersen has extended it to make use of temporal supersampling, resulting in a much smoother look.

HD Render Pipeline Transmission

The HD Render Pipeline’s default Lit shader supports several types of diffusion. It allows you to have materials with subsurface scattering or, as used for all the vegetation in this project, a simpler translucent material with light transmission only.

This effect is set up in two different locations:

On the material, you need to choose the ‘Translucent’ material type, input a thickness map, and choose a diffusion profile, which is the second location:

The diffusion profile settings, where you can edit all the other parameters of your transmission effect

Note: Our team added extra sliders to control the direct and the indirect transmission separately, for more control over the final result. However, this change does not respect PBR rules and thus will not make it into the HD Render Pipeline.

Area Volumes

The Area Volumes are built on the core volume system offered by SRP and are very similar to the Post Process Volumes. Their function is to drive object properties depending on the position of the Main Camera object.

Several objects, including the Directional Light, the atmospheric scattering, the Auto Focus, and the WindControl, have their properties driven by Area Volumes. So if you want to change the current lighting setup, for example, you will need to do that in the corresponding Area Volume. Those Area Volume objects are located in the main scene, under _SceneSettings > _AREASETTINGS, and have the suffix ‘_AV’.
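Conceptually, a volume system like this blends a property between its default and the volume’s value based on the camera’s position. Here is a hedged sketch of that idea (my own simplified version using a per-axis distance to an axis-aligned box, not the SRP volume code):

```python
def volume_weight(camera_pos, box_min, box_max, blend_distance):
    """Blend weight of an axis-aligned volume for the current camera
    position: 1 inside the box, fading to 0 over blend_distance outside.

    Simplification: uses the largest per-axis overshoot as the distance,
    rather than the true distance to the box.
    """
    dist = 0.0
    for p, lo, hi in zip(camera_pos, box_min, box_max):
        if p < lo:
            dist = max(dist, lo - p)
        elif p > hi:
            dist = max(dist, p - hi)
    if blend_distance <= 0.0:
        return 1.0 if dist == 0.0 else 0.0
    return max(0.0, 1.0 - dist / blend_distance)

def blend_property(default_value, volume_value, weight):
    """Drive a property (e.g. wind strength or sun intensity) toward the
    volume's value as the camera enters the volume."""
    return default_value + (volume_value - default_value) * weight
```

As the Main Camera walks into a volume, the weight ramps from 0 to 1 and the driven properties slide smoothly to that volume’s settings.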

Debug window

For those who have not used the HD Render Pipeline much: there is now a specific SRP debug window that you can open through the menu Window > General > Render Pipeline Debug.

With this, you can view individual GBuffer layers, lighting components, or specific texture maps from your materials, or even override albedo/smoothness/normal. It is a really useful tool when some objects are not rendering correctly or you hit any other visual bug; it will help you pinpoint the source of the issue a lot faster.

The best part is that those debug views are generated automatically from your shaders, and coders can create new debug views quite easily.

I even used those debug views to create the tree billboards used in the background of the scene: I just placed my assets in an empty scene, took screenshots with the albedo, roughness, and normal GBuffer layers visible, and used those to create my texture maps.

Optimization

While a big part of optimization happens on the code side, it is also important that your assets and scenes are set up properly if you want a decent framerate. Here are some of the ways the content was optimized for this project:

All our materials are using GPU Instancing.

We use LODs for most of the assets in this scene; this is a must-have.

The LOD cross-fade feature is great: it gives you a nice, smooth blend between the different LODs of your objects. But this feature is quite heavy and can really increase the draw call count in your project. For this reason, we disabled it on as many assets as possible.

To avoid noticeable transition between LODs, we started using Object Space normal maps on many of our large rock and cliff assets.

Note: Using Object Space normal maps instead of Tangent Space normal maps reduces the precision of the normal map. This is actually not very noticeable on our assets, which are very rough and noisy, but you probably don’t want to use it for hard-surface assets.

While it is important to limit the view distance through the way the scene is built, and by using occlusion culling, it is also worth knowing that many of the draw calls used to render your scene actually come from rendering each cascade of your shadow maps (specifically from the directional light, in our project).

We had a lot of draw calls coming from the small vegetation assets scattered on the terrain: hundreds and hundreds of them in some locations. We achieved a nice reduction in draw calls by creating larger patches of those grass and plant assets; instead of hundreds of them, we would then have only 15-20.

Note that this has an impact on visual quality: with such large assets, it becomes really hard to avoid the grass clipping through rocks and other assets placed on the ground.

We also use layer culling, a feature that already exists in Unity but has no UI. It allows you to cull objects assigned to a specific layer depending on their distance from the camera. Torbjorn extended this feature so that the shadow casting of those objects can also be culled, at a different distance. For example, most of our small vegetation assets stop casting shadows at a distance of around 15 meters, which is not very noticeable given the amount of noise from the grass and other plants on the ground, and they are completely culled at around 25 meters, no matter how their LODGroups are set up.
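The per-layer culling decision can be sketched as follows (illustrative Python; in Unity the per-layer cull distance itself is set through `Camera.layerCullDistances`, while the separate shadow-culling distance was the team’s custom extension):

```python
def layer_visibility(distance, cull_distance, shadow_cull_distance):
    """Decide whether an object on a distance-culled layer is drawn, and
    whether it still casts shadows, given its distance to the camera.

    Mirrors the behavior described above: small vegetation stops casting
    shadows first, then disappears entirely a bit further out.
    """
    drawn = distance < cull_distance
    casts_shadow = drawn and distance < shadow_cull_distance
    return drawn, casts_shadow
```

With the example distances from the text (shadows at 15 m, culling at 25 m), a grass patch at 20 m is still drawn but no longer contributes to the shadow maps.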

—

Stay tuned for the next blog post in the series, where we’ll explore the work that went into the shading, lighting, post-processing, and more in Book of the Dead.

If you couldn’t make it to Unite Berlin, we’ll soon be releasing Julien Heijmans’s presentation about environment art in the demo. You can follow our YouTube channel to keep up to date on when that video is released.

More information on Book of the Dead