This is part 15 of a tutorial series about rendering. In the previous installment, we added fog. Now we'll create our own deferred lights.

From now on, the Rendering tutorials are made with Unity 5.6.0. This Unity version changes a few things in both the editor and shaders, but you should still be able to find your way.

We added support for the deferred rendering path in Rendering 13, Deferred Shading. All we had to do was fill the G-buffers; the lights were rendered later. That tutorial briefly explained how those lights were added by Unity. This time, we'll render these lights ourselves.

To test the lights, I'll use a simple scene with its ambient intensity set to zero. It is rendered with a deferred HDR camera.

All objects in the scene are rendered to the G-buffers with our own shader, but the lights are rendered with Unity's default deferred shader, which is named Hidden/Internal-DeferredShading. You can verify this by going to the graphics settings via Edit / Project Settings / Graphics and switching the Deferred shader mode to Custom shader.

Each deferred light is rendered in a separate pass, modifying the colors of the image. Effectively, they're image effects, like our deferred fog shader from the previous tutorial. Let's start with a simple shader that overwrites everything with black.
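
As a minimal sketch of such a starting point (the shader and function names here are placeholders, not prescribed by Unity):

Shader "Custom/Deferred Shading" {
	SubShader {
		Pass {
			CGPROGRAM

			#pragma vertex VertexProgram
			#pragma fragment FragmentProgram

			#include "UnityCG.cginc"

			// A full-screen light pass only needs the vertex position.
			float4 VertexProgram (float4 vertex : POSITION) : SV_POSITION {
				return UnityObjectToClipPos(vertex);
			}

			// For now, output solid black for every fragment.
			float4 FragmentProgram () : SV_Target {
				return 0;
			}

			ENDCG
		}
	}
}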

After switching to our shader, Unity complains that it doesn't have enough passes. Apparently, a second pass is needed. Let's just duplicate the pass that we already have and see what happens.

Unity now accepts our shader and uses it to render the directional light. As a result, everything becomes black. The only exception is the sky. The stencil buffer is used as a mask to avoid rendering there, because the directional light doesn't affect the background.

But what about that second pass? Remember that when HDR is disabled, light data is logarithmically encoded. A final pass is needed to reverse this encoding. That's what the second pass is for. So if you disabled HDR for the camera, the second pass of our shader will also be used, once.

To make the second pass work, we have to convert the data in the light buffer. Like our fog shader, a full-screen quad is drawn with UV coordinates that we can use to sample the buffer.

The light buffer itself is made available to the shader via the _LightBuffer variable.

LDR colors are logarithmically encoded, using the formula 2^(-C). To decode this, we have to use the formula -log2(C).
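
Putting the quad's UV coordinates, the _LightBuffer variable, and the decoding formula together, the second pass's programs could look something like this sketch (the struct and program names mirror those of the first pass, which is an assumption on my part):

sampler2D _LightBuffer;

struct VertexData {
	float4 vertex : POSITION;
	float2 uv : TEXCOORD0;
};

struct Interpolators {
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;
};

Interpolators VertexProgram (VertexData v) {
	Interpolators i;
	i.pos = UnityObjectToClipPos(v.vertex);
	i.uv = v.uv;
	return i;
}

float4 FragmentProgram (Interpolators i) : SV_Target {
	// Reverse the logarithmic encoding: C = -log2(encoded color).
	return -log2(tex2D(_LightBuffer, i.uv));
}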

When rendering in LDR mode, you might see the sky turn black too. This can happen in the scene view or the game view. If the sky turns black, the conversion pass doesn't correctly use the stencil buffer as a mask. To fix this, explicitly configure the stencil settings of the second pass. We should only render when we're dealing with a fragment that's not part of the background. The appropriate stencil value is provided via _StencilNonBackground.
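
One way to write those stencil settings, as a sketch added inside the second pass; it only passes the test for fragments whose stencil value equals the non-background reference:

Stencil {
	Ref [_StencilNonBackground]
	ReadMask [_StencilNonBackground]
	CompBack Equal
	CompFront Equal
}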

Unfortunately, the frame debugger doesn't show any information about the stencil buffer, neither its contents nor how passes use it. Maybe this will be added in a future version.

Now that we know that it works, enable HDR again.

Directional Lights

The first pass takes care of rendering the lights, so it's going to be fairly complicated. Let's create an include file for it, named MyDeferredShading.cginc. Copy all code from the pass to this file.

#if !defined(MY_DEFERRED_SHADING)
#define MY_DEFERRED_SHADING

#include "UnityCG.cginc"

…

#endif

Then include MyDeferredShading in the first pass.

Pass {
	Cull Off
	ZTest Always
	ZWrite Off

	CGPROGRAM

	#pragma vertex VertexProgram
	#pragma fragment FragmentProgram

	#pragma exclude_renderers nomrt

	#include "MyDeferredShading.cginc"

	ENDCG
}

Because we're supposed to add light to the image, we have to make sure that we don't erase what's already been rendered. We can do so by changing the blend mode to combine the full source and destination colors.

Blend One One
Cull Off
ZTest Always
ZWrite Off

We need shader variants for all possible light configurations. The multi_compile_lightpass compiler directive creates all keyword combinations that we need. The only exception is HDR mode. We have to add a separate multi-compile directive for that.

#pragma exclude_renderers nomrt

#pragma multi_compile_lightpass
#pragma multi_compile _ UNITY_HDR_ON

Although this shader is used for all three light types, we'll first limit ourselves to directional lights only.
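
Unity signals the light type through the lightpass keywords, such as DIRECTIONAL, DIRECTIONAL_COOKIE, POINT, and SPOT. As a rough sketch of how the code can branch on them later (the exact structure is my assumption at this point):

#if defined(POINT) || defined(SPOT)
	// Point and spot lights need extra work; we'll deal with them later.
#else
	// A directional light, possibly with a cookie.
#endif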

G-Buffer UV Coordinates

We need UV coordinates to sample from the G-buffers. Unfortunately, Unity doesn't supply light passes with convenient texture coordinates. Instead, we have to derive them from the clip-space position. To do so, we can use the ComputeScreenPos function, which is defined in UnityCG. This function produces homogeneous coordinates, just like the clip-space coordinates, so we have to use a float4 to store them.

struct Interpolators {
	float4 pos : SV_POSITION;
	float4 uv : TEXCOORD0;
};

Interpolators VertexProgram (VertexData v) {
	Interpolators i;
	i.pos = UnityObjectToClipPos(v.vertex);
	i.uv = ComputeScreenPos(i.pos);
	return i;
}

In the fragment program, we can compute the final 2D coordinates. As explained in Rendering 7, Shadows, this has to happen after interpolation.

float4 FragmentProgram (Interpolators i) : SV_Target {
	float2 uv = i.uv.xy / i.uv.w;
	return 0;
}

World Position

When we created our deferred fog image effect, we had to figure out the fragment's distance from the camera. We did so by shooting rays from the camera through each fragment to the far plane, then scaling those by the fragment's depth value. We can use the same approach here to reconstruct the fragment's world position.

In the case of directional lights, the rays for the four vertices of the quad are supplied as normal vectors. So we can just pass them through the vertex program and interpolate them.

struct VertexData {
	float4 vertex : POSITION;
	float3 normal : NORMAL;
};

struct Interpolators {
	float4 pos : SV_POSITION;
	float4 uv : TEXCOORD0;
	float3 ray : TEXCOORD1;
};

Interpolators VertexProgram (VertexData v) {
	Interpolators i;
	i.pos = UnityObjectToClipPos(v.vertex);
	i.uv = ComputeScreenPos(i.pos);
	i.ray = v.normal;
	return i;
}

We can find the depth value in the fragment program by sampling the _CameraDepthTexture texture and linearizing it, just like we did for the fog effect.

UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

…

float4 FragmentProgram (Interpolators i) : SV_Target {
	float2 uv = i.uv.xy / i.uv.w;
	float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
	depth = Linear01Depth(depth);
	return 0;
}

However, a big difference is that we supplied rays that reached the far plane to our fog shader. In this case, we are supplied with rays that reach the near plane. We have to scale them so we get rays that reach the far plane. This can be done by dividing the ray by its Z coordinate, so that Z becomes 1, and then multiplying it by the far plane distance.

	depth = Linear01Depth(depth);
	float3 rayToFarPlane = i.ray * _ProjectionParams.z / i.ray.z;

Scaling this ray by the depth value gives us a position. The supplied rays are defined in view space, which is the camera's local space, so we end up with the fragment's position in view space as well.

	float3 rayToFarPlane = i.ray * _ProjectionParams.z / i.ray.z;
	float3 viewPos = rayToFarPlane * depth;

The conversion from this space to world space is done with the unity_CameraToWorld matrix, which is defined in UnityShaderVariables.

	float3 viewPos = rayToFarPlane * depth;
	float3 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1)).xyz;

Reading G-Buffer Data

Next, we need access to the G-buffers to retrieve the surface properties. The buffers are made available via three _CameraGBufferTexture variables.

sampler2D _CameraGBufferTexture0;
sampler2D _CameraGBufferTexture1;
sampler2D _CameraGBufferTexture2;

We filled these same buffers in the Rendering 13, Deferred Shading tutorial. Now we get to read from them. We need the albedo, specular tint, smoothness, and normal.

	float3 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1)).xyz;

	float3 albedo = tex2D(_CameraGBufferTexture0, uv).rgb;
	float3 specularTint = tex2D(_CameraGBufferTexture1, uv).rgb;
	float smoothness = tex2D(_CameraGBufferTexture1, uv).a;
	float3 normal = tex2D(_CameraGBufferTexture2, uv).rgb * 2 - 1;

Computing BRDF

The BRDF functions are defined in UnityPBSLighting, so we'll have to include that file instead.

//#include "UnityCG.cginc"
#include "UnityPBSLighting.cginc"

Now we only need three more bits of data before we can invoke the BRDF function in our fragment program. First is the view direction, which is found as usual.

	float3 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1)).xyz;
	float3 viewDir = normalize(_WorldSpaceCameraPos - worldPos);

Second is the surface reflectivity. We derive that from the specular tint: it's simply the strongest color component. We can use the SpecularStrength function to extract it.

	float3 albedo = tex2D(_CameraGBufferTexture0, uv).rgb;
	float3 specularTint = tex2D(_CameraGBufferTexture1, uv).rgb;
	float smoothness = tex2D(_CameraGBufferTexture1, uv).a;
	float3 normal = tex2D(_CameraGBufferTexture2, uv).rgb * 2 - 1;
	float oneMinusReflectivity = 1 - SpecularStrength(specularTint);

Third, we need the light data. Let's start with dummy lights.

	float oneMinusReflectivity = 1 - SpecularStrength(specularTint);

	UnityLight light;
	light.color = 0;
	light.dir = 0;
	UnityIndirect indirectLight;
	indirectLight.diffuse = 0;
	indirectLight.specular = 0;

Finally, we can compute the contribution of the light for this fragment, using the BRDF function.

	indirectLight.specular = 0;

	float4 color = UNITY_BRDF_PBS(
		albedo, specularTint, oneMinusReflectivity, smoothness,
		normal, viewDir, light, indirectLight
	);
	return color;

Configuring the Light

Indirect light is not applicable here, so it remains black. But the direct light has to be configured so it matches the light that's currently being rendered. For a directional light, we need a color and a direction. These are made available via the _LightColor and _LightDir variables.

float4 _LightColor, _LightDir;

Let's create a separate function to set up the light. Simply copy the variables into a light structure and return it.

UnityLight CreateLight () {
	UnityLight light;
	light.dir = _LightDir;
	light.color = _LightColor.rgb;
	return light;
}

Use this function in the fragment program.

	UnityLight light = CreateLight();
//	light.color = 0;
//	light.dir = 0;

Light from the wrong direction.

We finally get lighting, but it appears to come from the wrong direction. This happens because _LightDir is set to the direction in which the light is traveling. For our calculations, we need the direction from the surface to the light, so the opposite.

	light.dir = -_LightDir;

Directional light, without shadows.

Shadows

In My Lighting, we relied on the macros from AutoLight to determine the light attenuation caused by shadows. Unfortunately, that file wasn't written with deferred lights in mind, so we'll do the shadow sampling ourselves. The shadow map can be accessed via the _ShadowMapTexture variable.

sampler2D _ShadowMapTexture;

However, we cannot indiscriminately declare this variable. It is already defined for point and spotlight shadows in UnityShadowLibrary, which we indirectly include. So we should not define it ourselves, except when working with shadows for directional lights.

#if defined(SHADOWS_SCREEN)
	sampler2D _ShadowMapTexture;
#endif

To apply directional shadows, we simply have to sample the shadow texture and use it to attenuate the light color. Doing this in CreateLight means that the UV coordinates have to be added to it as a parameter.

UnityLight CreateLight (float2 uv) {
	UnityLight light;
	light.dir = -_LightDir;
	float shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;
	light.color = _LightColor.rgb * shadowAttenuation;
	return light;
}

Pass the UV coordinates to it in the fragment program.

	UnityLight light = CreateLight(uv);

Directional light with shadows.

Of course, this is only valid when the directional light has shadows enabled. If not, the shadow attenuation is always 1.

	float shadowAttenuation = 1;
	#if defined(SHADOWS_SCREEN)
		shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;
	#endif
	light.color = _LightColor.rgb * shadowAttenuation;

Fading Shadows

The shadow map is finite. It cannot cover the entire world. The larger an area it covers, the lower the resolution of the shadows. Unity has a maximum distance up to which shadows are drawn. Beyond that, there are no real-time shadows. This distance can be adjusted via Edit / Project Settings / Quality.

Shadow distance quality setting.

When shadows approach this distance, they fade out. At least, that's what Unity's shaders do. Because we're manually sampling the shadow map, our shadows get truncated when the edge of the map is reached. The result is that shadows get sharply cut off or are missing beyond the fade distance.

Large and small shadow distance.

To fade the shadows, we must first know the distance at which they should be completely gone. This distance depends on how the directional shadows are projected. In Stable Fit mode the fading is spherical, centered on the middle of the map. In Close Fit mode it's based on the view depth. The UnityComputeShadowFadeDistance function can figure out the correct metric for us. It has the world position and view depth as parameters. It will either return the distance from the shadow center, or the unmodified view depth.

UnityLight CreateLight (float2 uv, float3 worldPos, float viewZ) {
	UnityLight light;
	light.dir = -_LightDir;
	float shadowAttenuation = 1;

	#if defined(SHADOWS_SCREEN)
		shadowAttenuation = tex2D(_ShadowMapTexture, uv).r;
		float shadowFadeDistance =
			UnityComputeShadowFadeDistance(worldPos, viewZ);
	#endif

	light.color = _LightColor.rgb * shadowAttenuation;
	return light;
}

The shadows should begin to fade as they approach the fade distance, completely disappearing once they reach it. The UnityComputeShadowFade function calculates the appropriate fade factor.

		float shadowFadeDistance =
			UnityComputeShadowFadeDistance(worldPos, viewZ);
		float shadowFade = UnityComputeShadowFade(shadowFadeDistance);

What do these functions look like? They are defined in UnityShadowLibrary. The unity_ShadowFadeCenterAndType variable contains the shadow center and the shadow type. The _LightShadowData variable's Z and W components contain the scale and offset used for fading.

float UnityComputeShadowFadeDistance (float3 wpos, float z) {
	float sphereDist = distance(wpos, unity_ShadowFadeCenterAndType.xyz);
	return lerp(z, sphereDist, unity_ShadowFadeCenterAndType.w);
}

half UnityComputeShadowFade (float fadeDist) {
	return saturate(fadeDist * _LightShadowData.z + _LightShadowData.w);
}

The shadow fade factor is a value from 0 to 1, which indicates how much the shadows should fade away. The actual fading can be done by simply adding this value to the shadow attenuation, and clamping to 0–1.

		float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
		shadowAttenuation = saturate(shadowAttenuation + shadowFade);

To make this work, supply the world position and view depth to CreateLight in our fragment program. The view depth is the Z component of the fragment's position in view space.

	UnityLight light = CreateLight(uv, worldPos, viewPos.z);

Fading shadows.

Light Cookies

Another thing that we have to support is light cookies. The cookie texture is made available via _LightTexture0. Besides that, we also have to convert from world to light space, so we can sample the texture. The transformation for that is made available via the unity_WorldToLight matrix variable.

sampler2D _LightTexture0;
float4x4 unity_WorldToLight;

In CreateLight, use the matrix to convert the world position to light-space coordinates. Then use those to sample the cookie texture. Let's use a separate attenuation variable to keep track of the cookie's attenuation.

	light.dir = -_LightDir;
	float attenuation = 1;
	float shadowAttenuation = 1;

	#if defined(DIRECTIONAL_COOKIE)
		float2 uvCookie = mul(unity_WorldToLight, float4(worldPos, 1)).xy;
		attenuation *= tex2D(_LightTexture0, uvCookie).w;
	#endif

	…

	light.color = _LightColor.rgb * (attenuation * shadowAttenuation);

Directional light with cookie.

The results appear good, except when you pay close attention to geometry edges.

Artifacts along edges.

These artifacts appear when there is a large difference between the cookie coordinates of adjacent fragments. In those cases, the GPU chooses a mipmap level that is too low for the closest surface. Aras Pranckevičius figured this one out for Unity. The solution Unity uses is to apply a bias when sampling mipmaps, so we'll do that too.

		attenuation *= tex2Dbias(_LightTexture0, float4(uvCookie, 0, -8)).w;

Biased cookie sampling.