This is the second part of a tutorial series about rendering. The first part was about matrices. This time we'll write our first shader and import a texture.

This tutorial was made using Unity 5.4.0b10.

When you create a new scene in Unity, you start with a default camera and directional light. Create a simple sphere via GameObject / 3D Object / Sphere, put it at the origin, and place the camera just in front of it.

This is a very simple scene, yet there is already a lot of complex rendering going on. To get a good grip on the rendering process, it helps to get rid of all the fancy stuff and first concern ourselves with the fundamentals only.

Have a look at the lighting settings for the scene, via Window / Lighting. This will summon a lighting window with three tabs. We're only interested in the Scene tab, which is active by default.

There is a section about environmental lighting, where you can select a skybox. This skybox is currently used for the scene background, for ambient lighting, and for reflections. Set it to none so it is switched off.

While you're at it, you can also switch off the precomputed and real-time global illumination panels. We're not going to use those anytime soon.

Without a skybox, the ambient source automatically switches to a solid color. The default color is dark gray with a very slight blue tint. Reflections become solid black, as indicated by a warning box.

To further simplify the rendering, deactivate the directional light object, or delete it. This will get rid of the direct lighting in the scene, as well as the shadows that would be cast by it. What's left is the solid background, with the silhouette of the sphere in the ambient color.

As you might expect, the sphere has become darker and the background is now a solid color. However, the background is dark blue. Where does that color come from?

The background color is defined per camera. The camera renders the skybox by default, but it too falls back to a solid color.

No idea why that dark blue is the default, really. But it doesn't matter. This color replaces the previous image completely. It doesn't mix.

From Object to Image

Our very simple scene is drawn in two steps. First, the image is filled with the background color of the camera. Then our sphere's silhouette is drawn on top of that.

How does Unity know that it has to draw a sphere? We have a sphere object, and this object has a mesh renderer component. If this object lies inside the camera's view, it should be rendered. Unity verifies this by checking whether the object's bounding box intersects the camera's view frustum.

What's a bounding box? Take any mesh. Now find the smallest box in which that mesh would fit. That's a bounding box. It is automatically derived from the mesh of an object. You can consider bounding boxes simple approximations of the volume occupied by a mesh. If you cannot see the box, you certainly cannot see the mesh.

Default sphere object.

The transform component is used to alter the position, orientation, and size of the mesh and bounding box. Actually, the entire transformation hierarchy is used, as described in part 1, Matrices. If the object ends up in the camera's view, it is scheduled for rendering.

Finally, the GPU is tasked with rendering the object's mesh. The specific rendering instructions are defined by the object's material. The material references a shader – which is a GPU program – plus any settings it might have.

Who controls what.
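
As a concrete illustration of what such a setting can look like, here is a minimal sketch of a shader that exposes one setting via a Properties block. The shader name and the _Tint color are made up for this example and are not part of the tutorial's steps; the empty pass doesn't actually use the tint yet, this only shows how a material ends up with editable settings in its inspector.

    Shader "Custom/Tint Example" {

        // Properties declared here show up as settings
        // on every material that uses this shader.
        Properties {
            _Tint ("Tint", Color) = (1, 1, 1, 1)
        }

        SubShader {

            Pass {
            }
        }
    }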

Our object currently has the default material, which uses Unity's Standard shader. We're going to replace it with our own shader, which we'll build from the ground up.

Your First Shader

Create a new shader via Assets / Create / Shader / Unlit Shader and name it something like My First Shader.

Your first shader.

Open the shader file and delete its contents, so we can start from scratch. A shader is defined with the Shader keyword. It is followed by a string that describes the shader menu item that you can use to select this shader. It doesn't need to match the file name. After that comes the block with the shader's contents.

    Shader "Custom/My First Shader" {

    }

Save the file. You will get a warning that the shader is not supported, because it has no sub-shaders or fallbacks. That's because it's empty.

Although the shader is nonfunctional, we can already assign it to a material. So create a new material via Assets / Create / Material and select our shader from the shader menu.

Material with your shader.

Change our sphere object so it uses our own material, instead of the default material. The sphere will become magenta. This happens because Unity will switch to an error shader, which uses this color to draw your attention to the problem.

The shader error mentioned sub-shaders. You can use these to group multiple shader variants together. This allows you to provide different sub-shaders for different build platforms or levels of detail. For example, you could have one sub-shader for desktops and another for mobiles. We need just one sub-shader block.

    Shader "Custom/My First Shader" {

        SubShader {

        }
    }

The sub-shader has to contain at least one pass. A shader pass is where an object actually gets rendered. We'll use one pass, but it's possible to have more. Having more than one pass means that the object gets rendered multiple times, which is required for a lot of effects.

    Shader "Custom/My First Shader" {

        SubShader {

            Pass {

            }
        }
    }

Our sphere might now become white, as we're using the default behavior of an empty pass. If that happens, it means that we no longer have any shader errors. However, you might still see old errors in the console. They tend to stick around, not getting cleared when a shader recompiles without errors.

A white sphere.

Shader Programs

It is now time to write our own shader program. We do so with Unity's shading language, which is a variant of the HLSL and CG shading languages. We have to indicate the start of our code with the CGPROGRAM keyword. And we have to terminate with the ENDCG keyword.

    Pass {
        CGPROGRAM

        ENDCG
    }

Why are those keywords needed? Shader passes can contain other statements besides the shader program. So the program has to be separated somehow. Why not use another block for that? No idea. You will encounter more oddities like this. They are often old design decisions that made sense once, but no longer. Due to backwards compatibility, we're still stuck with them.

The shader compiler is now complaining that our shader doesn't have vertex and fragment programs. Shaders consist of two programs each. The vertex program is responsible for processing the vertex data of a mesh. This includes the conversion from object space to display space, just like we did in part 1, Matrices. The fragment program is responsible for coloring individual pixels that lie inside the mesh's triangles.

Vertex and fragment program.

We have to tell the compiler which programs to use, via pragma directives.

    CGPROGRAM

    #pragma vertex MyVertexProgram
    #pragma fragment MyFragmentProgram

    ENDCG

What's a pragma? The word pragma comes from Greek and refers to an action, or something that needs to be done. It's used in many programming languages to issue special compiler directives.

The compiler again complains, this time because it cannot find the programs that we specified. That's because we haven't defined them yet. The vertex and fragment programs are written as methods, quite like in C#, though they're typically referred to as functions. Let's simply create two empty void methods with the appropriate names.

    CGPROGRAM

    #pragma vertex MyVertexProgram
    #pragma fragment MyFragmentProgram

    void MyVertexProgram () {
    }

    void MyFragmentProgram () {
    }

    ENDCG

At this point the shader will compile, and the sphere will disappear. Or you will still get errors. It depends on which rendering platform your editor is using. If you're using Direct3D 9, you'll probably get errors.

Shader Compilation

Unity's shader compiler takes our code and transforms it into a different program, depending on the target platform. Different platforms require different solutions. For example, Direct3D for Windows, OpenGL for Macs, OpenGL ES for mobiles, and so on. We're not dealing with a single compiler here, but multiple.

Which compiler you end up using depends on what you're targeting. And as these compilers are not identical, you can end up with different results per platform. For example, our empty programs work fine with OpenGL and Direct3D 11, but fail when targeting Direct3D 9.

Select the shader in the editor and look at the inspector window. It displays some information about the shader, including the current compiler errors. There is also a Compiled code entry with a Compile and show code button and a dropdown menu. If you click the button, Unity will compile the shader and open its output in your editor, so you can inspect the generated code.

Shader inspector, with errors for all platforms.

You can select which platforms you manually compile the shader for, via the dropdown menu. The default is to compile for the graphics device that's used by your editor. You can manually compile for other platforms as well, either your current build platform, all platforms you have licenses for, or a custom selection. This enables you to quickly make sure that your shader compiles on multiple platforms, without having to make complete builds.

Selecting OpenGLCore.

To compile the selected programs, close the pop-up and click the Compile and show code button. Clicking the little Show button inside the pop-up will show you the used shader variants, which is not useful right now.

For example, here is the resulting code when our shader is compiled for OpenGLCore.

    // Compiled shader for custom platforms, uncompressed size: 0.5KB
    // Skipping shader variants that would not be included into build of current scene.
    Shader "Custom/My First Shader" {
    SubShader {
     Pass {
      GpuProgramID 16807
      Program "vp" {
       SubProgram "glcore " {
        "#ifdef VERTEX
        #version 150
        #extension GL_ARB_explicit_attrib_location : require
        #extension GL_ARB_shader_bit_encoding : enable
        void main()
        {
            return;
        }
        #endif
        #ifdef FRAGMENT
        #version 150
        #extension GL_ARB_explicit_attrib_location : require
        #extension GL_ARB_shader_bit_encoding : enable
        void main()
        {
            return;
        }
        #endif
        "
       }
      }
      Program "fp" {
       SubProgram "glcore " {
        "// shader disassembly not supported on glcore"
       }
      }
     }
    }
    }

The generated code is split into two blocks, vp and fp, for the vertex and fragment programs. However, in the case of OpenGL both programs end up in the vp block. The two main functions correspond to our empty methods. So let's focus on those and ignore the other code.

    #ifdef VERTEX
    void main()
    {
        return;
    }
    #endif
    #ifdef FRAGMENT
    void main()
    {
        return;
    }
    #endif

And here is the generated code for Direct3D 11, stripped down to the interesting parts. It looks quite different, but it's obvious that the code doesn't do much.

    Program "vp" {
     SubProgram "d3d11 " {
      vs_4_0
      0: ret
     }
    }
    Program "fp" {
     SubProgram "d3d11 " {
      ps_4_0
      0: ret
     }
    }

As we work on our programs, I will often show the compiled code for OpenGLCore and D3D11, so you can get an idea of what's happening under the hood.

Including Other Files

To produce a functional shader you need a lot of boilerplate code. Code that defines common variables, functions, and other things. Were this a C# program, we'd put that code in other classes. But shaders don't have classes. They're just one big file with all the code, without the grouping provided by classes or namespaces.

Fortunately, we can split the code into multiple files. You can use the #include directive to load a different file's contents into the current file. A typical file to include is UnityCG.cginc, so let's do that.

    CGPROGRAM

    #pragma vertex MyVertexProgram
    #pragma fragment MyFragmentProgram

    #include "UnityCG.cginc"

    void MyVertexProgram () {
    }

    void MyFragmentProgram () {
    }

    ENDCG

UnityCG.cginc is one of the shader include files that are bundled with Unity. It includes a few other essential files, and contains some generic functionality.

Include file hierarchy, starting at UnityCG.

UnityShaderVariables.cginc defines a whole bunch of shader variables that are necessary for rendering, like transformation, camera, and light data. These are all set by Unity when needed.

HLSLSupport.cginc sets things up so you can use the same code no matter which platform you're targeting. So you don't need to worry about using platform-specific data types and such.

UnityInstancing.cginc is specifically for instancing support, which is a specific rendering technique to reduce draw calls. Although it doesn't include the file directly, it depends on UnityShaderVariables.

Note that the contents of these files are effectively copied into your own file, replacing the include directive. This happens during a pre-processing step, which carries out all the pre-processing directives. Those directives are all statements that start with a hash, like #include and #pragma. After that step is finished, the code is processed again, and it is actually compiled.

What happens when you include a file more than once? Its contents get copied into your code more than once. You typically don't want to do this, as you'll likely get compiler errors due to duplicate definitions. There is an include file programming convention which guards against redefinitions. We'll use it when we write our own include files, but that's for a future tutorial.
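
For the curious, here is a minimal sketch of that guarding convention, using a hypothetical file name and definition, they are not part of Unity's include files. The preprocessor skips the contents on any second inclusion, because the symbol is already defined.

    // MyIncludeFile.cginc, a hypothetical include file guarded
    // against being compiled more than once.
    #if !defined(MY_INCLUDE_FILE_INCLUDED)
    #define MY_INCLUDE_FILE_INCLUDED

    // Shared definitions would go here, for example:
    float4 MySharedColor () {
        return float4(1, 1, 1, 1);
    }

    #endif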

Producing Output

To render something, our shader programs have to produce results. The vertex program has to return the final coordinates of a vertex. How many coordinates? Four, because we're using 4 by 4 transformation matrices, as described in part 1, Matrices. Change the function's type from void to float4. A float4 is simply a collection of four floating-point numbers. Just return 0 for now.

    float4 MyVertexProgram () {
        return 0;
    }

Is 0 a valid value to return? When using a single value like this, the compiler will repeat it for all float components. You can also be explicit and return float4(0, 0, 0, 0) if you like.

We're now getting an error about missing semantics. The compiler sees that we're returning a collection of four floats, but it doesn't know what that data represents. So it doesn't know what the GPU should do with it. We have to be very specific about the output of our program.

In this case, we're trying to output the position of the vertex. We have to indicate this by attaching the SV_POSITION semantic to our method. SV stands for system value, and POSITION for the final vertex position.

    float4 MyVertexProgram () : SV_POSITION {
        return 0;
    }

The fragment program is supposed to output an RGBA color value for one pixel. We can use a float4 for that as well. Returning 0 will produce solid black.

    float4 MyFragmentProgram () {
        return 0;
    }

Wouldn't 0 alpha be fully transparent? It would be, except that our shader actually ignores the alpha channel. We're working with an opaque shader right now. If we were writing a shader with support for transparency, you'd be right. We'll do that in a future tutorial.

The fragment program requires semantics as well. In this case, we have to indicate where the final color should be written to. We use SV_TARGET, which is the default shader target. This is the frame buffer, which contains the image that we are generating.

    float4 MyFragmentProgram () : SV_TARGET {
        return 0;
    }

But wait, the output of the vertex program is used as input for the fragment program. This suggests that the fragment program should get a parameter that matches the vertex program's output.

    float4 MyFragmentProgram (float4 position) : SV_TARGET {
        return 0;
    }

It doesn't matter what name we give to the parameter, but we have to make sure to use the correct semantic.

    float4 MyFragmentProgram (float4 position : SV_POSITION) : SV_TARGET {
        return 0;
    }

Can we omit the position parameter? As we're not using it, we might as well leave it out. However, this confuses some shader compilers when multiple parameters are involved. So it is best to exactly match the fragment program input with the vertex program output.

Our shader once again compiles without errors, but the sphere has disappeared. This shouldn't be surprising, because we collapse all its vertices to a single point.

If you look at the compiled OpenGLCore programs, you'll see that they now write to output values. And our single values have indeed been replaced with four-component vectors.

    #ifdef VERTEX
    void main()
    {
        gl_Position = vec4(0.0, 0.0, 0.0, 0.0);
        return;
    }
    #endif
    #ifdef FRAGMENT
    layout(location = 0) out vec4 SV_TARGET0;
    void main()
    {
        SV_TARGET0 = vec4(0.0, 0.0, 0.0, 0.0);
        return;
    }
    #endif

The same is true for the D3D11 programs, although the syntax is different.

    Program "vp" {
     SubProgram "d3d11 " {
      vs_4_0
      dcl_output_siv o0.xyzw, position
      0: mov o0.xyzw, l(0,0,0,0)
      1: ret
     }
    }
    Program "fp" {
     SubProgram "d3d11 " {
      ps_4_0
      dcl_output o0.xyzw
      0: mov o0.xyzw, l(0,0,0,0)
      1: ret
     }
    }
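
For reference, combining the pieces shown in this section, the complete shader file should now look like this.

    Shader "Custom/My First Shader" {

        SubShader {

            Pass {

                CGPROGRAM

                #pragma vertex MyVertexProgram
                #pragma fragment MyFragmentProgram

                #include "UnityCG.cginc"

                // Collapses every vertex to the origin for now.
                float4 MyVertexProgram () : SV_POSITION {
                    return 0;
                }

                // Writes solid black to the frame buffer.
                float4 MyFragmentProgram (
                    float4 position : SV_POSITION
                ) : SV_TARGET {
                    return 0;
                }

                ENDCG
            }
        }
    }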