This is the seventh part of a tutorial series about rendering. The previous part covered normal mapping. Now we'll take a look at shadows.

This tutorial was made with Unity 5.4.0f3.

Directional Shadows

While our lighting shader produces fairly realistic results by now, it evaluates each surface fragment in isolation. It assumes that a ray of light from every light source eventually hits every fragment. But this is only true if those rays aren't blocked by something.

Some light rays get blocked.

When an object sits between a light source and another object, it might prevent part or all of the light rays from reaching that other object. The rays that illuminate the first object are no longer available to illuminate the second object. As a result, the second object will remain at least partially unlit. The unlit area lies in the shadow of the first object. To describe this, we often say that the first object casts a shadow on the second one.

In reality, there is a transition region between fully lit and fully shadowed space, known as the penumbra. It exists because all light sources have a volume. As a result, there are regions where only part of the light source is visible, which means that they are partially shadowed. The larger the light source, and the further away a surface is from its shadow caster, the larger this region is.

Shadow with penumbra.

Unity doesn't support penumbra. Unity does support soft shadows, but that is a shadow filtering technique, not a simulation of penumbra.

Enabling Shadows

Without shadows, it is hard to see the spatial relationships between objects. To illustrate this, I created a simple scene with a few stretched cubes. I placed four rows of spheres above these cubes. The middle rows of spheres float, while the outer rows are connected to the cubes below them via cylinders. The objects have Unity's default white material.

The scene has two directional lights: the default directional light, and a slightly weaker yellow light. These are the same lights used in previous tutorials. Currently, the shadows are disabled project-wide; we did that in an earlier tutorial. The ambient intensity is also set to zero, which makes it easier to see the shadows.

Two directional lights, no shadows, no ambient light.

Shadows are part of the project-wide quality settings, found via Edit / Project Settings / Quality. We'll enable them at a high quality level. This means supporting both hard and soft shadows, using a high resolution, a stable fit projection, a distance of 150, and four cascades.

Shadow quality settings.

Make sure that both lights are set to cast soft shadows. Their resolution should depend on the quality settings.

Shadow settings per light.

With both directional lights casting shadows, the spatial relationships between all objects become a lot clearer. The entire scene has become both more realistic and more interesting to look at.

Scene with shadows.

Shadow Mapping

How does Unity add these shadows to the scene? The standard shader apparently has some way to determine whether a ray is blocked or not.

You could figure out whether a point lies in a shadow by casting a ray through the scene, from the light to the surface fragment. If that ray hits something before it reaches the fragment, then it is blocked. This is something that a physics engine could do, but it would be very impractical to do so for each fragment, per light. And then you'd have to get the results to the GPU somehow.

There are a few techniques to support real-time shadows. Each has its advantages and disadvantages. Unity uses the most common technique nowadays, which is shadow mapping. This means that Unity stores shadow information in textures, somehow. We'll now investigate how that works.

Open the frame debugger via Window / Frame Debugger, enable it, and look at the hierarchy of rendering steps. Compare a frame without and a frame with shadows enabled.

Rendering process without vs. with shadows.

When shadows are disabled, all objects are rendered as usual. We were already familiar with this process. But when shadows are enabled, the process becomes more complex. There are a few more rendering phases, and quite a lot more draw calls. Shadows are expensive!

Rendering to the Depth Texture

When directional shadows are enabled, Unity adds a depth pass to the rendering process. The result is put into a texture that matches the screen resolution. This pass renders the entire scene, but only records the depth information of each fragment. This is the same information that the GPU uses to determine whether a fragment ends up on top of or below a previously rendered fragment.

This data corresponds with a fragment's Z coordinate, in clip space. This is the space that defines the area that the camera can see. The depth information ends up stored as a value in the 0–1 range. When viewing the texture, nearby texels appear dark. The further away a texel is, the lighter it becomes.

Depth texture, with camera near plane set to 5.

What is clip space? It is the space that determines what the camera sees. When you select the main camera in the scene view, you will see a pyramid wireframe in front of it, which indicates what it can see.

Camera view, with large near plane value.

In clip space, this pyramid is a regular cube. The model-view-projection matrix is used to convert mesh vertices to this space. It is known as clip space because everything that ends up outside of this cube gets clipped, as it isn't visible.

This information actually has nothing to do with shadows directly, but Unity will use it in a later pass.
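To make the 0–1 depth range concrete, here is a minimal sketch in Python. It assumes a simple linear mapping between the near and far clip planes, as with an orthographic projection; a perspective camera's depth buffer is non-linear in practice, and the exact mapping is platform-dependent, so treat this purely as an illustration.

```python
# Illustrative sketch: map a view-space distance to the 0-1 range stored
# in a depth texture. Assumes a linear near-to-far mapping, which is an
# assumption; real perspective depth buffers are non-linear.

def depth01(z_view, near, far):
    """0 at the near plane (dark texel), 1 at the far plane (light texel)."""
    return (z_view - near) / (far - near)

print(depth01(5.0, 5.0, 100.0))    # 0.0 - at the near plane
print(depth01(52.5, 5.0, 100.0))   # 0.5 - halfway
print(depth01(100.0, 5.0, 100.0))  # 1.0 - at the far plane
```

This matches what you see when inspecting the texture: texels at the near plane are black, and they brighten with distance.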

Rendering to Shadow Maps

The next thing Unity renders is the shadow map of the first light. A little later, it will render the shadow map of the second light as well. Again, the entire scene is rendered, and again only the depth information is stored in a texture. However, this time the scene is rendered from the point of view of the light source. Effectively, the light acts as a camera. This means that the depth value tells us how far a ray of light traveled before it hit something. This can be used to determine if something is shadowed!

What about normal maps? The shadow maps record the depth of the actual geometry. Normal maps add the illusion of a rough surface, and shadow maps ignore them. Thus, shadows are not affected by normal maps.

Because we're using directional lights, their cameras are orthographic. As such, there is no perspective projection, and the exact position of the light's camera doesn't matter. Unity will position the camera so it sees all objects that are in view of the normal camera.

Two shadow maps, each with four viewpoints.

Actually, it turns out that Unity doesn't just render the entire scene once per light. The scene is rendered four times per light! The textures are split into four quadrants, each being rendered to from a different point of view. This happens because we opted to use four shadow cascades. If you were to switch to two cascades, the scene would be rendered twice per light. And without cascades, it is only rendered once per light. We will see why Unity does this when we look at the quality of shadows.
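Because a directional light's camera is orthographic, the depth it records is simply how far a ray travels along the light's direction before hitting something. A tiny sketch, with entirely hypothetical names, makes this concrete:

```python
# Illustrative sketch: for an orthographic light "camera", the recorded
# depth is the distance traveled along the light direction. The function
# name and setup are hypothetical, not Unity API.

def light_travel_distance(hit_point, light_plane_origin, light_dir):
    """Project the hit point onto the light direction (a dot product)."""
    return sum((p - o) * d for p, o, d in
               zip(hit_point, light_plane_origin, light_dir))

# A light shining straight down (-Y) from a plane at height 10 hits a
# surface at height 4: the ray traveled 6 units.
print(light_travel_distance((0.0, 4.0, 0.0), (0.0, 10.0, 0.0), (0.0, -1.0, 0.0)))
```

Note that the result only depends on the direction, not on where exactly the light's camera sits, which is why the light camera's position doesn't matter.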

Collecting Shadows

We have the depth information of the scene, from the point of view of the camera. We also have this information from the point of view of each light. Of course, this data is stored in different clip spaces, but we know the relative positions and orientations of these spaces. So we can convert from one space to the other, which allows us to compare the depth measurements from both points of view.

Conceptually, we have two vectors that should end up at the same point. If they do, both the camera and the light can see that point, so it is lit. If the light's vector ends before reaching the point, then the light is blocked, which means that the point is shadowed.

What about points that the scene camera can't see? Those points are hidden behind other points that are closer to the camera. The scene's depth texture only contains the closest points. As a result, no time is wasted on evaluating hidden points.

Screen-space shadows, per light.

Unity creates these textures by rendering a single quad that covers the entire view. It uses the Hidden/Internal-ScreenSpaceShadows shader for this pass. Each fragment samples from the scene's and light's depth textures, makes the comparison, and renders the final shadow value to a screen-space shadow map. Lit texels are set to 1, and shadowed texels are set to 0. At this point, Unity can also perform filtering, to create soft shadows.

Why does Unity alternate between rendering and collecting? Each light needs its own screen-space shadow map. But the shadow map rendered from the light's point of view can be reused.
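The core comparison that each fragment performs in this pass can be sketched as follows. The names and the small tolerance are illustrative; the actual shader also deals with cascades, biases, and filtering.

```python
# Illustrative sketch of the shadow-map depth comparison. Both depths are
# assumed to already be in the light's space; the epsilon is a made-up
# tolerance for this example.

def shadow_term(fragment_depth, shadow_map_depth, epsilon=1e-4):
    """Return 1.0 for a lit fragment, 0.0 for a shadowed one."""
    # If the light recorded a smaller depth, something sits between the
    # light and this fragment, so the ray was blocked.
    return 1.0 if fragment_depth <= shadow_map_depth + epsilon else 0.0

print(shadow_term(0.30, 0.30))  # 1.0 - the light reaches the fragment
print(shadow_term(0.70, 0.30))  # 0.0 - blocked by a surface at depth 0.30
```

The resulting 0-or-1 value is what ends up in the screen-space shadow map, and it is later multiplied into the light's color.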

Sampling the Shadow Maps

Finally, Unity is finished rendering shadows. Now the scene is rendered normally, with one change. The light colors are multiplied by the values stored in their shadow maps. This eliminates the light when it should be blocked.

Every fragment that gets rendered samples the shadow maps, including fragments that end up hidden behind other objects that are drawn later. So these fragments can end up receiving the shadows of the objects that will hide them. You can see this when stepping through the frame debugger. You can also see shadows appear before the objects that actually cast them. Of course, these mistakes only manifest while rendering the frame. Once it is finished, the image is correct.

Partially rendered frame, containing strange shadows.

Shadow Quality

When the scene is rendered from the light's point of view, its orientation does not match the scene camera. So the texels of the shadow maps don't align with the texels of the final image. The resolution of the shadow maps also ends up being different. The resolution of the final image is determined by the display settings, while the resolution of the shadow maps is determined by the shadow quality settings.

When the texels of the shadow maps end up rendered larger than those of the final image, they become noticeable. The edges of the shadows will be aliased. This is most obvious when using hard shadows.

Hard vs. soft shadows.

To make this as obvious as possible, change the shadow quality settings so we only get hard shadows, at the lowest resolution, with no cascades.

Low quality shadows.

It is now very obvious that the shadows are textures. Also, bits of shadow are appearing in places where they shouldn't. We'll look into that later.

The closer the shadows get to the scene camera, the larger their texels become. That's because the shadow map currently covers the entire area visible to the scene camera. We can increase the quality close to the camera by reducing the area that is covered by shadows, via the quality settings.

Shadow distance reduced to 25.

By limiting shadows to an area close to the scene camera, we can use the same shadow maps to cover a much smaller area. As a result, we get better shadows. But we lose the shadows that are further away. The shadows fade away as they approach the maximum distance.

Ideally, we get high-quality shadows up close, while also keeping the shadows that are far away. Because faraway shadows end up rendered to a smaller screen area, those can make do with a lower-resolution shadow map. This is what shadow cascades do. When enabled, multiple shadow maps are rendered into the same texture. Each map is for use at a certain distance.

Low resolution textures, with four cascades.
When using four cascades, the result looks a lot better, even though we're still using the same texture resolution. We're just using the texels much more efficiently. The downside is that we now have to render the scene three more times.

When rendering to the screen-space shadow maps, Unity takes care of sampling from the correct cascade. You can find where one cascade ends and another begins by looking for a sudden change of the shadow texel size. You can control the range of the cascade bands via the quality settings, as portions of the shadow distance. You can also visualize them in the scene view, by changing its Shading Mode. Instead of just Shaded, use Miscellaneous / Shadow Cascades. This will render the colors of the cascades on top of the scene.

Cascade regions, adjusted to show three bands.

How do I change the scene view's display mode? There is a dropdown list at the top left of the scene view window. By default, it is set to Shaded.

The shape of the cascade bands depends on the Shadow Projection quality setting. The default is Stable Fit. In this mode, the bands are chosen based on the distance to the camera's position. The other option is Close Fit, which uses the camera's depth instead. This produces rectangular bands in the camera's view direction.

Close fit.

This configuration allows for more efficient use of the shadow texture, which leads to higher-quality shadows. However, the shadow projection now depends on the position and orientation of the camera. As a result, when the camera moves or rotates, the shadow maps change as well. If you can see the shadow texels, you'll notice that they move. This effect is known as shadow edge swimming, and can be very obvious. That's why the other mode is the default.

Shadow swimming.

Don't Stable Fit shadows also depend on the camera position? They do, but Unity can align the maps so that when the camera position changes, the texels appear motionless.
Of course the cascade bands do move, so the transition points between the bands change. But if you don't notice the bands, you also don't notice that they move.
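To get a feel for distance-based cascade selection, here is an illustrative sketch that picks a band from a fragment's distance, expressed as a fraction of the shadow distance. The cumulative split fractions used below (6.7% / 13.3% / 26.7% / 53.3% portions) are an assumption about a typical four-cascade setup, and the function itself is hypothetical, not Unity's actual logic.

```python
# Illustrative sketch: choose a cascade band from a fragment's distance.
# The split fractions are cumulative and assumed; adjust to taste.

def cascade_index(distance, shadow_distance, splits=(0.067, 0.2, 0.467, 1.0)):
    fraction = distance / shadow_distance
    for index, split in enumerate(splits):
        if fraction <= split:
            return index
    return len(splits) - 1  # beyond the last band; the shadow fades out here

print(cascade_index(5.0, 150.0))    # 0 - nearest, highest-quality band
print(cascade_index(40.0, 150.0))   # 2
print(cascade_index(120.0, 150.0))  # 3 - farthest band
```

Each band reuses one quadrant of the shadow texture, which is why nearby fragments get far more texels per unit of surface area than distant ones.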

Shadow Acne

When we used low-quality hard shadows, we saw bits of shadow appear where they shouldn't. Unfortunately, this can happen regardless of the quality settings.

Each texel in the shadow map represents the point where a light ray hit a surface. However, texels aren't single points. They end up covering a larger area. And they are aligned with the light direction, not with the surface. As a result, they can end up sticking in, through, and out of surfaces like dark shards. As parts of the texels end up poking out of the surfaces that cast the shadow, the surface appears to shadow itself. This is known as shadow acne.

Shadow map causes acne.

Another source of shadow acne is numerical precision limitations. These limitations can cause incorrect results when very small distances are involved.

Severe acne, when using no biases at all.

One way to prevent this problem is by adding a depth offset when rendering the shadow maps. This bias is added to the distance from the light to the shadow-casting surface, which pushes the shadows into the surfaces.

Biased shadow map.

The shadow bias is configured per light, and is set to 0.05 by default.

Shadow settings per light.

A low bias can produce shadow acne, but a large bias introduces another problem. As the shadow-casting objects are pushed away from the lights, so are their shadows. As a result, the shadows will not be perfectly aligned with the objects. This isn't so bad when using a small bias. But too large a bias can make it seem like shadows are disconnected from the objects that cast them. This effect is known as peter panning.

Large bias causes peter panning.

Besides this distance bias, there is also a Normal Bias. This is a subtler adjustment of the shadow casters. This bias pushes the vertices of the shadow casters inwards, along their normals. This also reduces self-shadowing, but it also makes the shadows smaller and can cause holes to appear in the shadows.

What are the best bias settings?
There are no best settings. Unfortunately, you'll have to experiment. Unity's default settings might work, but they can also produce unacceptable results. Different quality settings can also produce different results.
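The effect of the depth bias can be illustrated numerically. The depths below are made up to mimic a precision error, and for simplicity the bias is applied here at comparison time, whereas the text describes it being added while rendering the shadow map; the net effect on the comparison is the same.

```python
# Illustrative sketch of how a depth bias suppresses shadow acne.
# The values are invented to mimic a tiny precision error.

def is_shadowed(fragment_depth, shadow_map_depth, bias=0.0):
    # Pushing the recorded depth away from the light by `bias` keeps a
    # surface from incorrectly shadowing itself.
    return fragment_depth > shadow_map_depth + bias

# A surface shadowing itself: both depths should match exactly, but the
# map reads back very slightly closer to the light.
frag, stored = 0.5000, 0.4999
print(is_shadowed(frag, stored))             # True  - shadow acne
print(is_shadowed(frag, stored, bias=0.05))  # False - acne removed
```

The trade-off from the text is visible here too: a genuinely shadowed fragment whose depth exceeds the stored value by less than the bias would also be treated as lit, which is what detaches shadows from their casters (peter panning).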