This is the fourth part of a tutorial series about rendering. The previous part was about combining textures. This time we'll look at how to compute lighting.

This tutorial was made using Unity 5.4.0b17.

Normals

We can see things because our eyes detect electromagnetic radiation. Individual quanta of light are known as photons. We can see only a part of the electromagnetic spectrum, which is known to us as visible light. The rest of the spectrum is invisible to us.

What's the entire electromagnetic spectrum? The spectrum is split into spectral bands. From low to high frequency, these are known as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays.

A light source emits light. Some of this light hits objects. Some of this light bounces off the object. If that light then ends up hitting our eyes – or the camera lens – then we see the object.

To work this all out, we have to know our object's surface. We already know its position, but not its orientation. For that, we need the surface normal vectors.

Using Mesh Normals

Duplicate our first shader, and use that as our first lighting shader. Create a material with this shader and assign it to some cubes and spheres in the scene. Give the objects different rotations and scales, some non-uniform, to get a varied scene.

	Shader "Custom/My First Lighting Shader" { … }

Some cubes and spheres.

Unity's cube and sphere meshes contain vertex normals. We can grab them and pass them straight to the fragment shader.

	struct VertexData {
		float4 position : POSITION;
		float3 normal : NORMAL;
		float2 uv : TEXCOORD0;
	};

	struct Interpolators {
		float4 position : SV_POSITION;
		float2 uv : TEXCOORD0;
		float3 normal : TEXCOORD1;
	};

	Interpolators MyVertexProgram (VertexData v) {
		Interpolators i;
		i.uv = TRANSFORM_TEX(v.uv, _MainTex);
		i.position = mul(UNITY_MATRIX_MVP, v.position);
		i.normal = v.normal;
		return i;
	}

Now we can visualize the normals in our shader.

	float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
		return float4(i.normal * 0.5 + 0.5, 1);
	}

Normal vectors as colors.

These are the raw normals, directly from the mesh. The faces of the cubes appear flat, because each face is a separate quad with four vertices. The normals of these vertices all point in the same direction. In contrast, the vertex normals of the spheres all point in different directions, resulting in a smooth interpolation.
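The `normal * 0.5 + 0.5` trick squeezes each component's [−1, 1] range into the [0, 1] color range. Here is a minimal sketch of that arithmetic in plain Python (not shader code; the function name is just for illustration):

```python
def normal_to_color(normal):
    """Map a unit normal's [-1, 1] components to [0, 1] RGB."""
    return tuple(c * 0.5 + 0.5 for c in normal)

# A normal pointing straight up becomes a light green.
print(normal_to_color((0.0, 1.0, 0.0)))   # (0.5, 1.0, 0.5)
# A normal pointing along negative X loses all red.
print(normal_to_color((-1.0, 0.0, 0.0)))  # (0.0, 0.5, 0.5)
```

So a face whose normal points along positive Y ends up greenish, which is exactly the color pattern visible on the tops of the cubes and spheres.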

Dynamic Batching

There is something strange going on with the cube normals. We'd expect each cube to show the same colors, but this is not the case. Even weirder, the cubes can change color, depending on how we look at them.

Color-changing cubes.

This is caused by dynamic batching. Unity dynamically merges small meshes together, to reduce draw calls. The meshes of the spheres are too large for this, so they aren't affected. But the cubes are fair game.

To merge meshes, Unity has to convert them from their local space to world space. This conversion affects the normals as well, which is why we see the colors change. And because whether and how objects are batched depends, among other things, on how they are sorted for rendering, the colors can change with the view.

If you want to, you can switch dynamic batching off via the player settings.

Batching settings.

Besides dynamic batching, Unity can also do static batching. This works differently for static geometry, but also involves a conversion to world space. It happens at build time.

Normals, without dynamic batching.

While you need to be aware of dynamic batching, it's nothing to worry about. In fact, we have to do the same conversion for our normals anyway. So you can leave batching enabled.

Normals in World Space

Except for dynamically batched objects, all our normals are in object space. But we have to know the surface orientation in world space. So we have to transform the normals from object to world space. We need the object's transformation matrix for that.

Unity collapses an object's entire transformation hierarchy into a single transformation matrix, just like we did in part 1. We could write this as `O = T_1 T_2 T_3 …` where `T` are the individual transformations and `O` is the combined transformation. This matrix is known as the object-to-world matrix.

Unity makes this matrix available in shaders via a float4x4 unity_ObjectToWorld variable, which is defined in UnityShaderVariables. Multiply this matrix with the normal in the vertex shader to transform it to world space. Because a normal is a direction, repositioning should be ignored. So the fourth homogeneous coordinate must be zero.

	Interpolators MyVertexProgram (VertexData v) {
		Interpolators i;
		i.position = mul(UNITY_MATRIX_MVP, v.position);
		i.normal = mul(unity_ObjectToWorld, float4(v.normal, 0));
		i.uv = TRANSFORM_TEX(v.uv, _MainTex);
		return i;
	}

Alternatively, we can multiply with only the 3 by 3 part of the matrix. The compiled code ends up the same, because the compiler will eliminate everything that gets multiplied with the constant zero.

	i.normal = mul((float3x3)unity_ObjectToWorld, v.normal);

Going from object to world space.

The normals are now in world space, but some appear brighter than others. That's because they got scaled as well. So we have to normalize them after the transformation.

	i.normal = mul(unity_ObjectToWorld, float4(v.normal, 0));
	i.normal = normalize(i.normal);

Normalized normals.

While we have normalized vectors again, they look weird for objects that don't have a uniform scale. That's because when a surface gets stretched in one dimension, its normals don't stretch in the same way.

Scaling X, both vertices and normals by ½.
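The effect of scaling on normals can be checked numerically. Below is a small sketch in plain Python (not shader code; the 3 by 3 helpers are ad hoc) showing that a uniform scale only changes a normal's length, which normalization undoes, while a non-uniform scale also skews its direction:

```python
import math

def mul3(m, v):
    """Multiply a 3x3 matrix (list of rows) with a column vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def length(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    l = length(v)
    return tuple(c / l for c in v)

# Uniform scale by 2: the direction survives, only the length changes.
uniform = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
up = (0.0, 1.0, 0.0)
print(length(mul3(uniform, up)))     # 2.0, so it needs normalizing
print(normalize(mul3(uniform, up)))  # back to (0.0, 1.0, 0.0)

# Non-uniform scale of X by 1/2: a diagonal normal gets skewed,
# and normalizing cannot fix the direction.
nonuniform = [[0.5, 0, 0], [0, 1, 0], [0, 0, 1]]
diagonal = normalize((1.0, 1.0, 0.0))
print(normalize(mul3(nonuniform, diagonal)))  # no longer at 45 degrees
```

The last vector's X and Y components are no longer equal, even though it has unit length again. That wrong direction is exactly the problem the next section solves.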
When the scale is not uniform, it should be inverted for the normals. That way the normals will match the shape of the deformed surface, after they've been normalized again. And it doesn't make a difference for uniform scales.

Scaling X, vertices by ½ and normals by 2.

So we have to invert the scale, but the rotation should remain the same. How can we do this?

We described an object's transformation matrix as `O = T_1 T_2 T_3 …` but we can be more specific than that. We know that each step in the hierarchy combines a scaling, a rotation, and a repositioning. So each `T` can be decomposed into `S R P`. This means that `O = S_1 R_1 P_1 S_2 R_2 P_2 S_3 R_3 P_3 …` but let's just say `O = S_1 R_1 P_1 S_2 R_2 P_2` to keep it short.

Because normals are direction vectors, we don't care about repositioning. So we can shorten it further to `O = S_1 R_1 S_2 R_2` and we only have to consider 3 by 3 matrices. We want to invert the scaling, but keep the rotations the same. So we want a new matrix `N = S_1^-1 R_1 S_2^-1 R_2`.

How do inverse matrices work?

The inverse of a matrix `M` is written as `M^-1`. It is a matrix that will undo the operation of another matrix when they are multiplied. Each is the inverse of the other. So `M M^-1 = M^-1 M = I`.

To undo a sequence of steps, you have to perform the inverse steps in reverse order. The mnemonic for this involves socks and shoes. This means that `(A B)^-1 = B^-1 A^-1`.

In the case of a single number `x`, its inverse is simply `1/x`, because `x/x = 1`. This also demonstrates that zero has no inverse. Likewise, not every matrix has an inverse. We're working with scaling, rotating, and repositioning matrices. As long as we're not scaling by zero, all these matrices can be inverted.

The inverse of a reposition matrix is made by simply negating the XYZ offset in its fourth column.

`[[1,0,0,x],[0,1,0,y],[0,0,1,z],[0,0,0,1]]^-1 = [[1,0,0,-x],[0,1,0,-y],[0,0,1,-z],[0,0,0,1]]`

The inverse of a scaling matrix is made by inverting its diagonal.
We only need to consider the 3 by 3 matrix.

`[[x,0,0],[0,y,0],[0,0,z]]^-1 = [[1/x,0,0],[0,1/y,0],[0,0,1/z]]`

Rotation matrices can be considered one axis at a time, for example around the Z axis. A rotation by `z` radians can be undone by simply rotating by `-z` radians. When you study the sine and cosine waves, you'll notice that `sin(-z) = -sin z` and `cos(-z) = cos z`. This makes the inverse matrix simple.

`[[cos z, -sin z, 0],[sin z, cos z, 0],[0,0,1]]^-1 = [[cos z, sin z, 0],[-sin z, cos z, 0],[0,0,1]]`

Notice that the inverse rotation is the same as the original matrix flipped across its main diagonal. Only the signs of the sine components changed.

Besides the object-to-world matrix, Unity also provides an object's world-to-object matrix. These matrices are indeed inverses of each other. So we also have access to `O^-1 = R_2^-1 S_2^-1 R_1^-1 S_1^-1`. That gives us the inverse scaling that we need, but also gives us the inverse rotations and a reversed transformation order. Fortunately, we can remove those unwanted effects by transposing the matrix. Then we get `(O^-1)^T = N`.

What is the transpose of a matrix?

The transpose of a matrix `M` is written as `M^T`. You transpose a matrix by flipping it across its main diagonal. So its rows become columns, and its columns become rows. Note that this means that the diagonal itself is unchanged.

`[[1,2,3],[4,5,6],[7,8,9]]^T = [[1,4,7],[2,5,8],[3,6,9]]`

Like inversion, transposing a sequence of matrix multiplications reverses its order, so `(A B)^T = B^T A^T`. This makes sense when working with matrices that aren't square, otherwise you could end up with invalid multiplications. But it's true in general, and you can look up the proof for it. Of course flipping twice gets you back where you started, so `(M^T)^T = M`.

Why does transposing produce the correct matrix?

First, notice that `R^-1 = R^T`, as observed above. This leads to `O^-1 = R_2^-1 S_2^-1 R_1^-1 S_1^-1 = R_2^T S_2^-1 R_1^T S_1^-1`.
Now let's transpose: `(O^-1)^T = (S_1^-1)^T (R_1^T)^T (S_2^-1)^T (R_2^T)^T = (S_1^-1)^T R_1 (S_2^-1)^T R_2`. Next, notice that `S^T = S`, because scaling matrices have zeros everywhere, except along their main diagonal. This leads to `(O^-1)^T = S_1^-1 R_1 S_2^-1 R_2 = N`.

So let's transpose the world-to-object matrix and multiply that with the vertex normal.

	i.normal = mul(transpose((float3x3)unity_WorldToObject), v.normal);
	i.normal = normalize(i.normal);

Correct world-space normals.

Actually, UnityCG contains a handy UnityObjectToWorldNormal function that does exactly this, so we can use that function. It performs the multiplication explicitly, instead of using transpose, which should result in better compiled code.

	Interpolators MyVertexProgram (VertexData v) {
		Interpolators i;
		i.position = mul(UNITY_MATRIX_MVP, v.position);
		i.normal = UnityObjectToWorldNormal(v.normal);
		i.uv = TRANSFORM_TEX(v.uv, _MainTex);
		return i;
	}

What does UnityObjectToWorldNormal look like?

Here it is. The inline keyword doesn't do anything, in case you're wondering.

	// Transforms normal from object to world space
	inline float3 UnityObjectToWorldNormal( in float3 norm ) {
		// Multiply by transposed inverse matrix,
		// actually using transpose() generates badly optimized code
		return normalize(
			unity_WorldToObject[0].xyz * norm.x +
			unity_WorldToObject[1].xyz * norm.y +
			unity_WorldToObject[2].xyz * norm.z
		);
	}
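The whole derivation can be verified numerically. Here is a sketch in plain Python (not shader code; the 3 by 3 helpers are ad hoc) that builds `O = S R` for a single transformation step with a non-uniform scale, checks that `(O^-1)^T` really equals `N = S^-1 R`, and confirms that a normal transformed by `N` stays perpendicular to a surface tangent transformed by `O`, while a normal naively transformed by `O` does not:

```python
import math

def matmul(a, b):
    """Multiply two 3x3 matrices stored as lists of rows."""
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def mulv(m, v):
    """Multiply a 3x3 matrix with a column vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

z = 0.6  # arbitrary rotation angle around Z, in radians
r = [[math.cos(z), -math.sin(z), 0],
     [math.sin(z),  math.cos(z), 0],
     [0, 0, 1]]
s     = [[2, 0, 0], [0, 0.5, 0], [0, 0, 1]]  # non-uniform scale
s_inv = [[0.5, 0, 0], [0, 2, 0], [0, 0, 1]]  # inverted diagonal

o = matmul(s, r)       # object-to-world: rotate, then scale
n = matmul(s_inv, r)   # normal matrix: same rotation, inverted scale

# The algebra: O^-1 = R^-1 S^-1 = R^T S^-1, so (O^-1)^T = S^-1 R = N.
o_inv_t = transpose(matmul(transpose(r), s_inv))
assert all(abs(o_inv_t[i][j] - n[i][j]) < 1e-12
           for i in range(3) for j in range(3))

# A tangent and a normal that start out perpendicular.
tangent = (1.0, 0.0, 0.0)
normal = (0.0, 1.0, 0.0)

naive = dot(mulv(o, tangent), mulv(o, normal))    # skewed: nonzero
correct = dot(mulv(o, tangent), mulv(n, normal))  # still perpendicular
print(naive, correct)
```

The `correct` dot product comes out zero, so the inversely scaled normal is still perpendicular to the deformed surface, which is what UnityObjectToWorldNormal achieves in the shader.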