A few months ago, Justin and I began investigating ways to make our next game (in Unity3D) look great. We knew we wanted to stick with 2D art, but didn’t want to tacitly accept the “conventional” expectations associated with 2D art in games.

SpriteLamp

We discovered SpriteLamp, which essentially allows you to generate dynamic lighting on pixel art. It accomplishes this by producing normal maps, depth maps, and anisotropy maps for use in shaders. All you provide are 2-5 “lighting profiles” of what an object would look like lit from a specific direction (top, bottom, left, right, or front). This animation sort of sells itself:

Courtesy of the SpriteLamp Kickstarter

We encourage anyone interested to check out the Kickstarter or SnakeHillGames for more information. SpriteLamp was successfully funded, and we’ve received beta access. The tool, even in its beta state, is very usable, and has a UI that’s easy enough to understand for now:

The UI for SpriteLamp

Although we’re not artists, even we could see how exciting this would be to get working in Unity. SpriteLamp’s developer, Finn Morgan, said that a shader for Unity will be provided later, but we decided that we couldn’t wait, so we wrote it ourselves.

Shaders in Unity

For those unfamiliar with how shaders work in Unity, here are some resources that helped us a lot:

Another important aspect to keep in mind is that if your shader has errors, it’s easiest to see the errors by viewing the shader file itself in Unity’s inspector window:

Sometimes Unity’s Console window will show all shader errors, but I’ve found the Inspector for the shader to be more reliable.

With all of that in mind, let’s get started. I figure it will be more valuable to talk through the various aspects of the shader, rather than just provide the shader in its entirety (though if you just want that, check the end of the article).

A Bare Bones Cg Shader

Let’s start with a bare-minimum shader to get a feel for the structure of a Unity shader, since that structure was pretty overwhelming for me at first, especially as it relates to lighting. If you’re familiar with the structure of shaders and how Unity handles multiple lights, feel free to jump to the next section.

Shader "Custom/Bare Bones" {

    Properties {
        _MainTex ("Diffuse Texture", 2D) = "white" {}
    }

    SubShader {

        AlphaTest NotEqual 0.0

        Pass {
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            // User-specified properties
            uniform sampler2D _MainTex;

            struct VertexInput {
                float4 vertex : POSITION;
                float4 uv : TEXCOORD0;
            };

            struct VertexOutput {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            VertexOutput vert(VertexInput input) {
                VertexOutput output;
                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.uv = float2(input.uv);
                return output;
            }

            float4 frag(VertexOutput input) : COLOR {
                float4 diffuseColor = tex2D(_MainTex, input.uv);
                return diffuseColor;
            }
            ENDCG
        }

        Pass {
            Tags { "LightMode" = "ForwardAdd" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            // User-specified properties
            uniform sampler2D _MainTex;

            struct VertexInput {
                float4 vertex : POSITION;
                float4 uv : TEXCOORD0;
            };

            struct VertexOutput {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            VertexOutput vert(VertexInput input) {
                VertexOutput output;
                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.uv = float2(input.uv);
                return output;
            }

            float4 frag(VertexOutput input) : COLOR {
                float4 diffuseColor = tex2D(_MainTex, input.uv);
                return diffuseColor;
            }
            ENDCG
        }
    }
}

This might seem a bit overwhelming already for someone new to shaders in Unity, but it’s a great starting point. Let’s talk about what is going on.

Shader Header

On line 1, we specify the name of the shader, as viewed when selecting a shader for a material. Using slashes gets you nested folders in the shader dropdown. On lines 3-6, the shader properties specify what data you can set outside the shader that will be brought in. See this documentation for more information on shader properties. Since we’re using the new Unity 2D features, _MainTex is required if you’re going to use SpriteRenderer. On line 9, we specify that pixels with an alpha of 0 should be ignored.

ForwardBase Pass

We’re now describing one pass of our shader, referred to as “ForwardBase”. This is where ambient lighting, the first directional light, per-vertex lights, and lights using spherical harmonics are handled. This Unity reference page explains the various Pass tags in more detail, and this page explains how Unity handles multiple lights in shaders.

Then we begin writing our Cg shader, which occurs between CGPROGRAM and ENDCG. We specify the function names that we’ll use for our vertex and fragment shaders. Then we state the data that we’ll bring in from outside the shader. These variables must be named the same as values specified in the Properties section. Next, structs are defined for the data that our vertex shader will receive, and what it will output. The output of the vertex shader is the same data received by the fragment shader (after interpolation of that data occurs). For now, we’re just using the vertex position and texture coordinates.

In our vertex shader, we simply pass the texture coordinates through, but we multiply the position by the model*view*projection matrix. This converts the vertex position from object space to screen space. In our fragment shader, we simply sample the pixel from the main texture and return that color.
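To make the matrix step concrete, here is a small numeric sketch (in Python rather than Cg, purely for illustration; the matrix values are made up and are not a real Unity MVP matrix):

```python
# Multiply a 4x4 matrix by a homogeneous vertex, the same operation as
# mul(UNITY_MATRIX_MVP, input.vertex) in the vertex shader.
def transform(matrix, vertex):
    return tuple(sum(row[i] * vertex[i] for i in range(4)) for row in matrix)

# A toy orthographic-style matrix: scale X and Y by 0.5, leave Z and W alone.
mvp = [
    [0.5, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

print(transform(mvp, (2.0, 4.0, 1.0, 1.0)))  # -> (1.0, 2.0, 1.0, 1.0)
```

A real MVP matrix also encodes the object's rotation and translation plus the camera projection, but the mechanics are the same.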

ForwardAdd Pass

This pass is used for per-pixel lights, and it should look really familiar. For now it’s the same, but that will quickly change.

Ambient Lighting in ForwardBase

For our uses, we only wanted to focus on ambient lighting in our ForwardBase pass. However, if you want to add directional lights or per-vertex lights, this Cg shader page will be helpful.

Pass {
    Tags { "LightMode" = "ForwardBase" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"

    // User-specified properties
    uniform sampler2D _MainTex;

    struct VertexInput {
        float4 vertex : POSITION;
        float4 color : COLOR;
        float4 uv : TEXCOORD0;
    };

    struct VertexOutput {
        float4 pos : POSITION;
        float4 color : COLOR;
        float2 uv : TEXCOORD0;
    };

    VertexOutput vert(VertexInput input) {
        VertexOutput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        output.color = input.color;
        output.uv = float2(input.uv);
        return output;
    }

    float4 frag(VertexOutput input) : COLOR {
        float4 diffuseColor = tex2D(_MainTex, input.uv);
        float3 ambientLighting =
            float3(UNITY_LIGHTMODEL_AMBIENT) * float3(diffuseColor) * float3(input.color);
        return float4(ambientLighting, diffuseColor.a);
    }
    ENDCG
}

With this shader pass, we’ve added color to the data we receive in the vertex shader and pass to the fragment shader. The field marked COLOR corresponds to the color set via the SpriteRenderer. Factoring that in, along with UNITY_LIGHTMODEL_AMBIENT (the ambient light color specified in your project’s render settings), we get ambient-lit sprites that we can further tint via the SpriteRenderer:
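The ambient term itself is just a component-wise product of three colors. A minimal sketch of that math (in Python, purely for illustration; the color values are made up):

```python
# Component-wise product of ambient light, texture color, and sprite tint,
# mirroring the ambient term in the fragment shader above.
def ambient_lighting(ambient, diffuse, tint):
    return tuple(a * d * t for a, d, t in zip(ambient, diffuse, tint))

# A mid-gray ambient light, a reddish texel, and a white tint (illustrative values).
color = ambient_lighting((0.5, 0.5, 0.5), (0.8, 0.2, 0.2), (1.0, 1.0, 1.0))
print(color)  # -> (0.4, 0.1, 0.1)
```

Setting a non-white tint on the SpriteRenderer simply scales each channel down further.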

We’re now done with the ForwardBase pass, so all code past this point is focusing on the ForwardAdd pass.

Phong Illumination

Phong illumination (your standard ambient+diffuse+specular lighting) has been described in many places (such as Wikipedia), but it’s an important step to getting SpriteLamp integrated properly in Unity. To prepare for this, we have to add some shader properties:

Properties {
    _MainTex ("Diffuse Texture", 2D) = "white" {}
    _SpecColor ("Specular Color", Color) = (1, 1, 1, 1)
    _Shininess ("Shininess", Float) = 10
}

It’s easy to get the intention of “shininess” reversed. Smoother surfaces have larger values for this, which result in smaller specular highlights.
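To see why, here is a quick numeric sketch (in Python, purely for illustration) of the specular term pow(max(0, R·V), shininess) evaluated at a fixed angle of 20 degrees off the perfect reflection direction:

```python
import math

# Specular falloff pow(max(0, R.V), shininess): a larger exponent makes the
# term die off faster away from the perfect-reflection angle, which reads as
# a smaller, tighter highlight on screen.
def specular_factor(angle_degrees, shininess):
    return max(0.0, math.cos(math.radians(angle_degrees))) ** shininess

for n in (5, 10, 50):
    print(n, round(specular_factor(20.0, n), 4))
```

The factor shrinks quickly as the exponent grows, so a "shinier" (smoother) material only lights up very close to the mirror direction.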

Pass {
    Tags { "LightMode" = "ForwardAdd" }

    Blend One One // additive blending

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"

    // User-specified properties
    uniform sampler2D _MainTex;
    uniform float4 _SpecColor;
    uniform float _Shininess;
    uniform float4 _LightColor0;

    struct VertexInput {
        float4 vertex : POSITION;
        float4 color : COLOR;
        float4 uv : TEXCOORD0;
    };

    struct VertexOutput {
        float4 pos : POSITION;
        float4 color : COLOR;
        float2 uv : TEXCOORD0;
        float4 posWorld : TEXCOORD1;
    };

    VertexOutput vert(VertexInput input) {
        VertexOutput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        output.posWorld = mul(_Object2World, input.vertex);
        output.color = input.color;
        output.uv = float2(input.uv);
        return output;
    }

    float4 frag(VertexOutput input) : COLOR {
        float4 diffuseColor = tex2D(_MainTex, input.uv);

        // Sprites are screen-aligned, so the normal points toward the screen
        float3 normalDirection = float3(0.0f, 0.0f, -1.0f);

        // For orthographic cameras, the view direction is always known
        float3 viewDirection = float3(0.0f, 0.0f, -1.0f);

        float3 vertexToLightSource = float3(_WorldSpaceLightPos0 - input.posWorld);
        float distance = length(vertexToLightSource);
        float attenuation = 1.0 / distance; // Linear attenuation
        float3 lightDirection = normalize(vertexToLightSource);

        // Compute diffuse part of lighting
        float normalDotLight = dot(normalDirection, lightDirection);
        float3 diffuseReflection = float3(diffuseColor) * input.color * attenuation
            * float3(_LightColor0) * max(0.0f, normalDotLight);

        // Compute specular part of lighting
        float3 specularReflection;
        if (normalDotLight < 0.0) {
            // Light source is on the wrong side, so there's no specular reflection
            specularReflection = float3(0.0, 0.0, 0.0);
        } else {
            specularReflection = attenuation * float3(_LightColor0) * float3(_SpecColor)
                * input.color * pow(max(0.0, dot(reflect(-lightDirection, normalDirection),
                    viewDirection)), _Shininess);
        }

        return float4(diffuseReflection + specularReflection, diffuseColor.a);
    }
    ENDCG
}

On line 4, we specify a blend state for additive blending. By using additive blending, we allow multiple lights to contribute to lighting, instead of each light overwriting the previous one.

Our vertex shader now also outputs “posWorld”, which is the vertex position in world coordinates (unlike output.pos, which is in screen coordinates). We’ll need this for computing light strength. Although it’s not a texture coordinate, we bind it to TEXCOORD1 because we choose how to interpret it in the fragment shader.

Fragment Shader

In order to compute Phong illumination, you need to know the surface normal (otherwise you don’t know how strongly a light hits the surface). Since we’re working with Unity 2D sprites, the normal is always (0, 0, -1), pointing toward the screen. Additionally, the view direction (which points from the fragment to the camera) is needed to compute specular highlights. Since we’re using an orthographic camera, this is also a known value.

_LightColor0 is a built-in value provided by Unity, and is pretty self-explanatory. This is also the first time we encounter lighting attenuation, which specifies how light strength decreases with distance from the light. This typically follows an inverse quadratic curve, but for now we’re using inverse linear. There’s nothing wrong with either, but inverse linear provides more light.
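The difference between the two curves is easy to see numerically; here is a quick sketch (in Python, purely for illustration):

```python
# Inverse-linear (1/d) versus inverse-quadratic (1/d^2) attenuation.
# Beyond 1 unit of distance, inverse-linear leaves more light, which is
# why the shader above uses it.
def linear_attenuation(distance):
    return 1.0 / distance

def quadratic_attenuation(distance):
    return 1.0 / (distance * distance)

for d in (1.0, 2.0, 4.0):
    print(d, linear_attenuation(d), quadratic_attenuation(d))
# At d=2: linear gives 0.5, quadratic gives 0.25.
```

Inside 1 unit the relationship flips (inverse-quadratic is brighter), but for typical light-to-sprite distances inverse-linear gives the softer, wider falloff.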

It’s important to note that this code is intended for point lights only. If you want to use directional lights, then the attenuation and lightDirection are calculated differently. If you want to use spot lights, cookie attenuation needs to be added.

The rest of the new code in this shader is just following the Phong illumination model, so I won’t explain any further.

Normal Maps

Now that we have standard lighting implemented, it’s time to take advantage of SpriteLamp! The first (and most important) aspect to integrate is the normal map. For the head above, our normal map looks like this:

Like with Phong illumination, for normal maps we have to add to our shader properties:

Properties {
    _MainTex ("Diffuse Texture", 2D) = "white" {}
    _Normal ("Normal", 2D) = "bump" {}
    _SpecColor ("Specular Material Color", Color) = (1,1,1,1)
    _Shininess ("Shininess", Float) = 10
}

For the normal map, “bump” refers to a default texture where the red and green channels (which correspond to the x and y components of the normals) are 128, and the blue channel (the normal’s z component) is 255. The range of values in a normal map is from -1 to 1, so by default a normal map specifies normals of (0, 0, 1). This is why most normal maps appear blue.

As you might expect, the only thing that changes when incorporating normals is the normalDirection variable. This still requires a few changes throughout the shader, though:

// List of external variables pulled into shader:
uniform sampler2D _MainTex;
uniform sampler2D _Normal;
uniform float4 _SpecColor;
uniform float _Shininess;
uniform float4 _LightColor0;

// Computing the normal:
float3 normalDirection = (tex2D(_Normal, input.uv).xyz - 0.5f) * 2.0f;
normalDirection = float3(mul(float4(normalDirection, 1.0f), _World2Object));
normalDirection.z *= -1;
normalDirection = normalize(normalDirection);

There’s a lot going on in those few lines that compute the normal. Since we’re getting the normal from a texture, we have to convert from color coordinates to normal coordinates. Colors range from 0 to 1, while normals range from -1 to 1. This is handled on line 9.

Next, we multiply the normal by the “world to object” matrix. This is necessary because that matrix contains the transform for things such as rotated sprites. Without this line, the lighting wouldn’t change as you rotate a sprite around a light!

As mentioned above, the default normal value is (0, 0, 1). However, as you’ll recall from the Phong illumination shader, we used (0, 0, -1), so we negate the z component. Finally, we normalize the normal.
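The decode steps (color to [-1, 1], flip Z, normalize) can be sketched numerically (in Python, for illustration; the world-to-object rotation step from the shader is omitted here):

```python
import math

# Decode a normal-map texel: map colors in [0, 1] to [-1, 1], flip Z so the
# normal points toward the camera, then normalize. (The world-to-object
# matrix multiply from the shader is omitted for brevity.)
def decode_normal(r, g, b):
    x, y, z = (r - 0.5) * 2.0, (g - 0.5) * 2.0, (b - 0.5) * 2.0
    z = -z
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# The default "bump" texel (128, 128, 255) / 255 decodes to roughly (0, 0, -1).
print(decode_normal(128 / 255, 128 / 255, 255 / 255))
```

Note that the 8-bit value 128 maps to slightly more than 0.5, so the decoded X and Y are only approximately zero; normalizing cleans up the vector's length but not that small bias.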

Depth Maps

Another feature that SpriteLamp provides is depth maps, which adjust the depth of the fragment. For sprites, this translates to adjusting the Z position, but more generally it adjusts the position along the normal that would be computed in the vertex shader. We’re just taking shortcuts because we’re using sprites (and also since shaders have an instruction limit). This depth map generated by SpriteLamp adds some definition to the ear and jawline, and adds rounding to the edges of the head and face:

// Shader properties:
Properties {
    _MainTex ("Diffuse Texture", 2D) = "white" {}
    _Normal ("Normal Map", 2D) = "bump" {}
    _Depth ("Depth Map", 2D) = "gray" {}
    _SpecColor ("Specular Material Color", Color) = (1,1,1,1)
    _Shininess ("Shininess", Float) = 10
    _AmplifyDepth ("Amplify Depth", Float) = 1
}

// User-specified properties:
uniform sampler2D _MainTex;
uniform sampler2D _Normal;
uniform sampler2D _Depth;
uniform float4 _LightColor0;
uniform float4 _SpecColor;
uniform float _Shininess;
uniform float _AmplifyDepth;

// Vertex to light source calculation:
float depthColor = (tex2D(_Depth, input.uv).x - 0.5f) * 2.0f;
float3 posWorld = float3(input.posWorld);
posWorld.z -= depthColor * _AmplifyDepth;

float3 vertexToLightSource = float3(_WorldSpaceLightPos0) - posWorld;

As you probably expected, we added the depth map to the shader properties. We interpret the depth map so it can both add and subtract depth: the color range 0 to 1 maps to depth values from -1 to 1. This means our texture should default to “gray”, and we perform the same computation as for normals to convert into the -1 to 1 range.

After getting the depth adjustment, we then subtract it from our posWorld.z, factoring in the “Amplify Depth” setting. We’re subtracting because our camera is looking in the positive Z direction, and the brighter areas of the depth map are “closer”, which means moving in the negative Z direction.
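Those two steps can be sketched numerically (in Python, for illustration):

```python
# Decode a depth-map texel and apply it to a world-space Z, as in the shader:
# gray (0.5) means no offset, white pulls the fragment toward the camera
# (negative Z), black pushes it away. _AmplifyDepth scales the effect.
def adjust_depth(pos_z, depth_texel, amplify_depth=1.0):
    depth = (depth_texel - 0.5) * 2.0
    return pos_z - depth * amplify_depth

print(adjust_depth(0.0, 0.5))                     # gray: unchanged -> 0.0
print(adjust_depth(0.0, 1.0))                     # white: toward camera -> -1.0
print(adjust_depth(0.0, 0.0, amplify_depth=2.0))  # black, amplified -> 2.0
```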

The difference in “amplify depth” settings is very noticeable, but you have to be careful to not increase it so much that the sprite will be “within” a light:

The unlit parts in the center of the head have final depth values that are closer to the camera than the light.

A Future Improvement

Some of you may have noticed that we’re only focusing on the x component for the depth map. And for the normal map, we only focused on the x, y, and z components. While we haven’t implemented it, one improvement that we’re considering (and we hope you do too) is to combine the normal map and depth map, so that the alpha value of the bitmap is the depth value, reducing the number of texture lookups needed.
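As a sketch of what that packing could look like (the texel layout here is hypothetical; SpriteLamp does not currently emit such a combined texture):

```python
# Hypothetical packed texel: the normal map's RGB channels plus the depth
# value stored in the alpha channel, so one texture lookup returns both.
def pack_texel(normal_rgb, depth):
    r, g, b = normal_rgb
    return (r, g, b, depth)

def unpack_texel(texel):
    r, g, b, a = texel
    return (r, g, b), a

texel = pack_texel((0.5, 0.5, 1.0), 0.75)
normal_rgb, depth = unpack_texel(texel)
print(normal_rgb, depth)  # -> (0.5, 0.5, 1.0) 0.75
```

In the shader, this would collapse the two tex2D calls into one, at the cost of giving up the normal map's alpha channel.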

Cel-Shading

As Finn Morgan pointed out in his recent blog post, adding cel-shading is a pretty simple technique:

// Shader properties:
Properties {
    _MainTex ("Diffuse Texture", 2D) = "white" {}
    _Normal ("Normal", 2D) = "bump" {}
    _Depth ("Depth", 2D) = "gray" {}
    _SpecColor ("Specular Material Color", Color) = (1,1,1,1)
    _Shininess ("Shininess", Float) = 10
    _AmplifyDepth ("Amplify Depth", Float) = 1
    _CelShadingLevels ("Cel Shading Levels", Float) = 0
}

// User-specified properties:
uniform sampler2D _MainTex;
uniform sampler2D _Normal;
uniform sampler2D _Depth;
uniform float4 _SpecColor;
uniform float4 _LightColor0;
uniform float _Shininess;
uniform float _AmplifyDepth;
uniform float _CelShadingLevels;

// The end of the fragment shader:

// Compute diffuse part of lighting
float normalDotLight = dot(normalDirection, lightDirection);
float diffuseLevel = attenuation * max(0.0f, normalDotLight);

// Compute specular part of lighting
float specularLevel;
if (normalDotLight < 0.0f) {
    // Light is on the wrong side, no specular reflection
    specularLevel = 0.0f;
} else {
    // For orthographic cameras, the view direction is always known
    float3 viewDirection = float3(0.0f, 0.0f, -1.0f);
    specularLevel = attenuation * pow(max(0.0, dot(reflect(-lightDirection,
        normalDirection), viewDirection)), _Shininess);
}

// Add cel-shading if enough levels were specified
if (_CelShadingLevels >= 2) {
    diffuseLevel = floor(diffuseLevel * _CelShadingLevels) / (_CelShadingLevels - 0.5f);
    specularLevel = floor(specularLevel * _CelShadingLevels) / (_CelShadingLevels - 0.5f);
}

float3 diffuseReflection =
    float3(diffuseColor) * input.color * float3(_LightColor0) * diffuseLevel;
float3 specularReflection =
    float3(_LightColor0) * float3(_SpecColor) * input.color * specularLevel;

return float4(diffuseReflection + specularReflection, diffuseColor.a);

Note that now we compute the diffuse and specular “level” (before color is factored in), apply cel-shading to that, and then add the color-based computations in. If you just added the cel-shading to the end of the previous shader, you would end up with per-component cel-shading which generally looks terrible:

However, with the changes to apply cel-shading before the color is added, we get a much better result:
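The quantization those two floor lines perform can be sketched numerically (in Python, for illustration):

```python
import math

# Cel-shading quantization from the shader: snap a continuous lighting level
# to one of N discrete bands. Dividing by (N - 0.5) instead of N slightly
# brightens the top band, matching the shader code above.
def cel_shade(level, levels):
    if levels < 2:
        return level  # cel-shading disabled
    return math.floor(level * levels) / (levels - 0.5)

# With 3 levels, a smooth 0..0.9 ramp collapses to a few discrete values.
print(sorted({round(cel_shade(x / 10.0, 3), 3) for x in range(10)}))  # -> [0.0, 0.4, 0.8]
```

Because the banding is applied to the scalar diffuse and specular levels, all three color channels snap to the same band together, which is what avoids the per-component artifacts shown above.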

The Finished Shader

Putting everything together, the final shader looks like this:

// Shader for Unity integration with SpriteLamp
// Written by Steve Karolewics & Indreams Studios
Shader "Custom/SpriteLamp" {

    Properties {
        _MainTex ("Diffuse Texture", 2D) = "white" {}
        _Normal ("Normal", 2D) = "bump" {}
        _Depth ("Depth", 2D) = "gray" {}
        _SpecColor ("Specular Material Color", Color) = (1,1,1,1)
        _Shininess ("Shininess", Float) = 10
        _AmplifyDepth ("Amplify Depth", Float) = 1
        _CelShadingLevels ("Cel Shading Levels", Float) = 0
    }

    SubShader {

        AlphaTest NotEqual 0.0

        Pass {
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            // User-specified properties
            uniform sampler2D _MainTex;

            struct VertexInput {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float4 uv : TEXCOORD0;
            };

            struct VertexOutput {
                float4 pos : POSITION;
                float4 color : COLOR;
                float2 uv : TEXCOORD0;
            };

            VertexOutput vert(VertexInput input) {
                VertexOutput output;
                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.color = input.color;
                output.uv = float2(input.uv);
                return output;
            }

            float4 frag(VertexOutput input) : COLOR {
                float4 diffuseColor = tex2D(_MainTex, input.uv);
                float3 ambientLighting =
                    float3(UNITY_LIGHTMODEL_AMBIENT) * float3(diffuseColor) * float3(input.color);
                return float4(ambientLighting, diffuseColor.a);
            }
            ENDCG
        }

        Pass {
            Tags { "LightMode" = "ForwardAdd" }

            Blend One One // additive blending

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            // User-specified properties
            uniform sampler2D _MainTex;
            uniform sampler2D _Normal;
            uniform sampler2D _Depth;
            uniform float4 _SpecColor;
            uniform float4 _LightColor0;
            uniform float _Shininess;
            uniform float _AmplifyDepth;
            uniform float _CelShadingLevels;

            struct VertexInput {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float4 uv : TEXCOORD0;
            };

            struct VertexOutput {
                float4 pos : POSITION;
                float4 color : COLOR;
                float2 uv : TEXCOORD0;
                float4 posWorld : TEXCOORD1;
            };

            VertexOutput vert(VertexInput input) {
                VertexOutput output;
                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.posWorld = mul(_Object2World, input.vertex);
                output.uv = float2(input.uv);
                output.color = input.color;
                return output;
            }

            float4 frag(VertexOutput input) : COLOR {
                float4 diffuseColor = tex2D(_MainTex, input.uv);

                // To compute the correct normal:
                // 1) Get the pixel value from the normal map
                // 2) Subtract 0.5 and multiply by 2 to convert from the range 0...1 to -1...1
                // 3) Multiply by world to object matrix, to handle rotation, etc
                // 4) Negate Z so that lighting works as expected (sprites further away from the
                //    camera than a light are lit, etc.)
                // 5) Normalize
                float3 normalDirection = (tex2D(_Normal, input.uv).xyz - 0.5f) * 2.0f;
                normalDirection = float3(mul(float4(normalDirection, 1.0f), _World2Object));
                normalDirection.z *= -1;
                normalDirection = normalize(normalDirection);

                // To adjust depth:
                // 1) Get the depth value from the depth map
                // 2) Subtract 0.5 and multiply by 2 to convert from the range 0...1 to -1...1
                // 3) Multiply by the amplify depth value, and subtract from the fragment's z position
                float depthColor = (tex2D(_Depth, input.uv).x - 0.5f) * 2.0f;
                float3 posWorld = float3(input.posWorld);
                posWorld.z -= depthColor * _AmplifyDepth;

                float3 vertexToLightSource = float3(_WorldSpaceLightPos0) - posWorld;
                float distance = length(vertexToLightSource);

                // The values for attenuation and lightDirection are assuming point lights
                float attenuation = 1.0 / distance; // Linear attenuation is good enough for now
                float3 lightDirection = normalize(vertexToLightSource);

                // Compute diffuse part of lighting
                float normalDotLight = dot(normalDirection, lightDirection);
                float diffuseLevel = attenuation * max(0.0f, normalDotLight);

                // Compute specular part of lighting
                float specularLevel;
                if (normalDotLight < 0.0f) {
                    // Light is on the wrong side, no specular reflection
                    specularLevel = 0.0f;
                } else {
                    // For orthographic cameras, the view direction is always known
                    float3 viewDirection = float3(0.0f, 0.0f, -1.0f);
                    specularLevel = attenuation * pow(max(0.0, dot(reflect(-lightDirection,
                        normalDirection), viewDirection)), _Shininess);
                }

                // Add cel-shading if enough levels were specified
                if (_CelShadingLevels >= 2) {
                    diffuseLevel = floor(diffuseLevel * _CelShadingLevels)
                        / (_CelShadingLevels - 0.5f);
                    specularLevel = floor(specularLevel * _CelShadingLevels)
                        / (_CelShadingLevels - 0.5f);
                }

                float3 diffuseReflection =
                    float3(diffuseColor) * input.color * float3(_LightColor0) * diffuseLevel;
                float3 specularReflection =
                    float3(_LightColor0) * float3(_SpecColor) * input.color * specularLevel;

                return float4(diffuseReflection + specularReflection, diffuseColor.a);
            }
            ENDCG
        }
    }

    // The definition of a fallback shader should be commented out
    // during development:
    // Fallback "Transparent/Diffuse"
}

You can also download it directly here: http://indreams-studios.com/SpriteLamp.shader

Conclusion

There are still plenty of features that we haven’t implemented for our shader, such as ambient occlusion, anisotropy maps, self-shadowing, and wraparound lighting. We’re content with where we are at this point though, since we can continue making progress on our game visually, and there’s always time to improve the shader later. If you end up using (or improving) this shader, please let us know! We’d love to learn that we’re helping out other developers.

Edit: By request, we added the MIT license to the final shader file, so feel free to use it unrestricted!