This is the first installment of a tutorial series covering Unity's scriptable render pipeline. This tutorial assumes you've gone through the Basics series first, and the Procedural Grid tutorial. The first few parts of the Rendering series are also useful.

This tutorial is made with Unity 2018.3.0f2.

I have another tutorial series covering the scriptable render pipeline. It is made for Unity 2019 and later, while this one uses the experimental SRP API, which only works with Unity 2018. The other series takes a different and more modern approach but will cover a lot of the same topics. It's still useful to work through this series if you don't want to wait until the new one has caught up with it.

Now we can return a new instance of MyPipeline in InternalCreatePipeline . This means that we technically have a valid pipeline, although it still doesn't render anything.
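Put together, the asset class then simply constructs the pipeline. A minimal sketch, assuming the class names used in this tutorial and the Unity 2018 experimental SRP API:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

[CreateAssetMenu(menuName = "Rendering/My Pipeline")]
public class MyPipelineAsset : RenderPipelineAsset {

	protected override IRenderPipeline InternalCreatePipeline () {
		// Return a fresh pipeline instance; Unity takes care of
		// holding on to it and using it for rendering.
		return new MyPipeline();
	}
}
```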

Although we can implement IRenderPipeline ourselves, it is more convenient to extend the abstract RenderPipeline class instead. That type already provides a basic implementation of IRenderPipeline that we can build on.

To create a valid pipeline, we have to provide an object instance that implements IRenderPipeline and is responsible for the rendering process. So create a class for that, naming it MyPipeline .
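A minimal starting point for that class might look like this, a sketch assuming the experimental namespace that this tutorial uses:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// An empty pipeline for now; RenderPipeline supplies the basic
// IRenderPipeline implementation that we build on.
public class MyPipeline : RenderPipeline {}
```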

We have now replaced the default pipeline, which changes a few things. First, a lot of options have disappeared from the graphics settings, which Unity also mentions in an info panel. Second, as we've bypassed the default pipeline without providing a valid replacement, nothing gets rendered anymore. The game window, scene window, and material previews are no longer functional, although the scene window still shows the skybox. If you open the frame debugger—via Window / Analysis / Frame Debugger—and enable it, you will see that indeed nothing gets drawn in the game window.

Use the new menu item to add the asset to the project, naming it My Pipeline.

That puts an entry in the Asset / Create menu. Let's be tidy and put it in a Rendering submenu. We do this by setting the menuName property of the attribute to Rendering/My Pipeline. The property can be set directly after the attribute type, within round brackets.
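As a sketch, the attribute with its menuName property set might look like this:

```csharp
[CreateAssetMenu(menuName = "Rendering/My Pipeline")]
public class MyPipelineAsset : RenderPipelineAsset { … }
```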

Now we need to add an asset of this type to our project. To make that possible, add a CreateAssetMenu attribute to MyPipelineAsset .

Because interfaces do not contain concrete implementations, it is possible for classes and even structs to extend more than one interface. If multiple interfaces happen to define the same thing, they simply agree that the functionality should be there. This is not possible with classes—even when abstract—because that could lead to conflicting implementations.

An interface is like a class, except that it defines a functionality contract without providing an implementation of it. It only defines properties, events, indexers, and method signatures, which are all public by definition. Any type that extends an interface is required to contain implementations of what the interface defines. The convention is to prefix interface type names with an I.
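For illustration, a small hypothetical example—IMovable and Cart are made-up names, not part of Unity's API:

```csharp
// The interface only declares the contract.
public interface IMovable {
	void Move (float deltaTime);
}

// Any type that extends the interface must implement all of its members.
public class Cart : IMovable {
	public void Move (float deltaTime) {
		// Concrete behavior goes here.
	}
}
```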

The return type of InternalCreatePipeline is IRenderPipeline . The I prefix of the type name indicates that it is an interface type.

The main purpose of the pipeline asset is to give Unity a way to get a hold of a pipeline object instance that is responsible for rendering. The asset itself is just a handle and a place to store pipeline settings. We don't have any settings yet, so all we have to do is give Unity a way to get our pipeline object instance. This is done by overriding the InternalCreatePipeline method. But we haven't defined our pipeline object type yet, so at this point we'll just return null .
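In code, that could look like this for now, sketched against the experimental API:

```csharp
public class MyPipelineAsset : RenderPipelineAsset {

	protected override IRenderPipeline InternalCreatePipeline () {
		// We haven't defined a pipeline type yet,
		// so there is nothing to return at this point.
		return null;
	}
}
```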

It will move out of the experimental namespace at some point, either to UnityEngine.Rendering or to another namespace. When that happens, it's just a matter of updating the using statement, unless the API also gets changed.

Create a new script for our custom pipeline asset. We'll simply name our pipeline My Pipeline. Its asset type will thus be MyPipelineAsset and it has to extend RenderPipelineAsset , which is defined in the UnityEngine.Experimental.Rendering namespace.
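The initial script might look like this, a sketch assuming Unity 2018's experimental namespace:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// The asset type for our pipeline.
// RenderPipelineAsset is a ScriptableObject type.
public class MyPipelineAsset : RenderPipelineAsset {}
```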

To setup our own pipeline, we have to assign a pipeline asset to the Scriptable Render Pipeline Settings field. Such assets have to extend RenderPipelineAsset , which is a ScriptableObject type.

Currently, Unity uses the default forward rendering pipeline. To use a custom pipeline, we have to select one in the graphics settings, which can be found via Edit / Project Settings / Graphics.

Fill the scene with a few objects, making use of all four materials.

We'll need a few simple materials to test our pipeline. I've created four materials. First, a default standard opaque material with a red albedo. Second, the same material but with Rendering Mode set to Transparent and a blue albedo with decreased alpha. Third, a material using the Unlit/Color shader with its color set to yellow. And finally a material using the Unlit/Transparent shader without any changes, so it appears solid white.

We're going to work in linear color space, but Unity 2018 still uses gamma space as the default. So go to the player settings via Edit / Project Settings / Player and switch Color Space in the Other Settings section to Linear.

Once the project is open, go to the package manager via Window / Package Manager and remove all the packages that were included by default, as we won't need them. Only keep the Package Manager UI, which cannot be removed.

Open Unity 2018 and create a new project. I'm using Unity 2018.2.9f1, but any 2018.2 version or higher should also work. Create a standard 3D project, with analytics disabled. We'll create our own pipeline, so don't select one of the pipeline options.

In this tutorial we will setup a minimal render pipeline that draws unlit shapes. Once that's working, we can extend our pipeline in later tutorials, adding lighting, shadows, and more advanced features.

Unity 2018 added support for scriptable render pipelines, making it possible to design pipelines from scratch, though you still have to rely on Unity for many individual steps, like culling. Unity 2018 introduced two new pipelines made with this new approach, the lightweight pipeline and the high-definition pipeline. Both pipelines are still in the preview stage and the scriptable render pipeline API is still marked as experimental technology. But at this point it is stable enough for us to go ahead and create our own pipeline.

Unity 2017 supports two predefined render pipelines, one for forward rendering and one for deferred rendering. It also still supports an older deferred rendering approach introduced in Unity 5. These pipelines are fixed. You are able to enable, disable, or override certain parts of the pipelines, but it's not possible to drastically deviate from their design.

To render anything, Unity has to determine what shapes have to be drawn, where, when, and with what settings. This can get very complex, depending on how many effects are involved. Lights, shadows, transparency, image effects, volumetric effects, and so on all have to be dealt with in the correct order to arrive at the final image. This process is known as a render pipeline.

Rendering

The pipeline object takes care of rendering each frame. All Unity does is invoke the pipeline's Render method with a context and the cameras that are active. This is done for the game window, but also for the scene window and material previews in the editor. It is up to us to configure things appropriately, figure out what needs to be rendered, and do everything in the correct order.

Context

RenderPipeline contains an implementation of the Render method defined in the IRenderPipeline interface. Its first argument is the render context, a ScriptableRenderContext struct that acts as a facade for native code. Its second argument is an array containing all cameras that need to be rendered.

RenderPipeline.Render doesn't draw anything, but checks whether the pipeline object is valid to use for rendering. If not, it raises an exception. We will override this method and invoke the base implementation, to keep this check.

```csharp
public class MyPipeline : RenderPipeline {

	public override void Render (
		ScriptableRenderContext renderContext, Camera[] cameras
	) {
		base.Render(renderContext, cameras);
	}
}
```

It is through the render context that we issue commands to the Unity engine to render things and control render state. One of the simplest examples is drawing the skybox, which can be done by invoking the DrawSkybox method.

```csharp
		base.Render(renderContext, cameras);

		renderContext.DrawSkybox();
```

DrawSkybox requires a camera as an argument. We'll simply use the first element of cameras.

```csharp
		renderContext.DrawSkybox(cameras[0]);
```

We still don't see the skybox appear in the game window. That's because the commands that we issue to the context are buffered. The actual work happens after we submit it for execution, via the Submit method.

```csharp
		renderContext.DrawSkybox(cameras[0]);
		renderContext.Submit();
```

The skybox finally appears in the game window, and you can also see it appear in the frame debugger.

Frame debugger showing the skybox gets drawn.

Cameras

We are supplied with an array of cameras, because there can be multiple cameras in the scene that all have to be rendered. Example uses for multi-camera setups are split-screen multiplayer, mini maps, and rear-view mirrors. Each camera needs to be handled separately.

We won't worry about multi-camera support for our pipeline at this point. We'll simply create an alternative Render method that acts on a single camera. Have it draw the skybox and then submit, so we submit per camera.

```csharp
	void Render (ScriptableRenderContext context, Camera camera) {
		context.DrawSkybox(camera);
		context.Submit();
	}
```

Invoke the new method for each element of the cameras array. I use a foreach loop in this case, as Unity's pipelines also use this approach to loop through the cameras.

```csharp
	public override void Render (
		ScriptableRenderContext renderContext, Camera[] cameras
	) {
		base.Render(renderContext, cameras);

		//renderContext.DrawSkybox(cameras[0]);
		//renderContext.Submit();

		foreach (var camera in cameras) {
			Render(renderContext, camera);
		}
	}
```

How does foreach work?

foreach (var e in a) { … } works like for (int i = 0; i < a.Length; i++) { var e = a[i]; … }, assuming that a is an array. The only functional difference is that we do not have access to the iterator variable i. When a isn't an array but something else that is enumerable, enumerators come into play and you might end up with temporary object creation, which is best avoided. But using foreach with arrays is safe. The use of var to define the element variable is common, so I use it as well. Its type is the element type of a.

Note that the orientation of the camera currently doesn't affect how the skybox gets rendered. We pass the camera to DrawSkybox, but that's only used to determine whether the skybox should be drawn at all, which is controlled via the camera's clear flags. To correctly render the skybox—and the entire scene—we have to set up the view-projection matrix.
This transformation matrix combines the camera's position and orientation—the view matrix—with the camera's perspective or orthographic projection—the projection matrix. You can see this matrix in the frame debugger. It is unity_MatrixVP, one of the shader properties used when something is drawn.

At the moment, the unity_MatrixVP matrix is always the same. We have to apply the camera's properties to the context, via the SetupCameraProperties method. That sets up the matrix as well as some other properties.

```csharp
	void Render (ScriptableRenderContext context, Camera camera) {
		context.SetupCameraProperties(camera);

		context.DrawSkybox(camera);
		context.Submit();
	}
```

Now the skybox gets rendered correctly, taking the camera properties into account, both in the game window and in the scene window.

Command Buffers

The context delays the actual rendering until we submit it. Before that, we configure it and add commands to it for later execution. Some tasks—like drawing the skybox—can be issued via a dedicated method, but other commands have to be issued indirectly, via a separate command buffer.

A command buffer can be created by instantiating a new CommandBuffer object, which is defined in the UnityEngine.Rendering namespace. Command buffers already existed before the scriptable render pipeline was added, so they aren't experimental. Create such a buffer before we draw the skybox.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Experimental.Rendering;

public class MyPipeline : RenderPipeline {

	…

	void Render (ScriptableRenderContext context, Camera camera) {
		context.SetupCameraProperties(camera);

		var buffer = new CommandBuffer();

		context.DrawSkybox(camera);
		context.Submit();
	}
}
```

We can instruct the context to execute the buffer via its ExecuteCommandBuffer method. Once again, this doesn't immediately execute the commands, but copies them to the internal buffer of the context.

```csharp
		var buffer = new CommandBuffer();
		context.ExecuteCommandBuffer(buffer);
```

Command buffers claim resources to store their commands at the native level of the Unity engine. If we no longer need these resources, it is best to release them immediately. This can be done by invoking the buffer's Release method, directly after invoking ExecuteCommandBuffer.

```csharp
		var buffer = new CommandBuffer();
		context.ExecuteCommandBuffer(buffer);
		buffer.Release();
```

Executing an empty command buffer does nothing. We added it so that we can clear the render target, to make sure that rendering isn't influenced by what was drawn earlier. This is possible via a command buffer, but not directly via the context.

A clear command can be added to the buffer by invoking ClearRenderTarget. It requires three arguments: two booleans and a color.
The first argument controls whether the depth information is cleared, the second whether the color is cleared, and the third is the clear color, if used. For example, let's clear the depth data, ignore the color data, and use Color.clear as the clear color.

```csharp
		var buffer = new CommandBuffer();
		buffer.ClearRenderTarget(true, false, Color.clear);
		context.ExecuteCommandBuffer(buffer);
		buffer.Release();
```

The frame debugger will now show us that a command buffer gets executed, which clears the render target. In this case, it indicates that Z and stencil get cleared. Z refers to the depth buffer, and the stencil buffer always gets cleared.

Clearing the depth and stencil buffers.

What gets cleared is configured per camera, via its clear flags and background color. We can use those instead of hard-coding how we clear the render target.

```csharp
		CameraClearFlags clearFlags = camera.clearFlags;
		buffer.ClearRenderTarget(
			(clearFlags & CameraClearFlags.Depth) != 0,
			(clearFlags & CameraClearFlags.Color) != 0,
			camera.backgroundColor
		);
```

How do the clear flags work?

CameraClearFlags is an enumeration that can be used as a set of bit flags. Each bit of the value is used to indicate whether a certain feature is enabled or not. To extract a flag from the entire value, combine the value with the desired flag using the bitwise AND operator &. If the result is not zero, then the flag is set.

Because we haven't given the command buffer a name, the debugger displays the default name, which is Unnamed command buffer. Let's use the camera's name instead, by assigning it to the buffer's name property. We'll use object initializer syntax to do this.

```csharp
		var buffer = new CommandBuffer {
			name = camera.name
		};
```

Using the camera name for the command buffer.

How does object initializer syntax work?

We could've also written buffer.name = camera.name; as a separate statement after invoking the constructor. But when creating a new object, you can append a code block to the constructor's invocation.
Then you can set the object's fields and properties in the block, without having to reference the object instance explicitly. Also, it makes explicit that the instance should only be used after those fields and properties have been set. Besides that, it makes initialization possible where only a single statement is allowed, without requiring constructors with many parameter variants. Note that we omitted the empty parameter list of the constructor invocation, which is allowed when object initializer syntax is used.

Culling

We're able to render the skybox, but not yet any of the objects that we put in the scene. Rather than rendering every object, we'll only render those that the camera can see. We do that by starting with all renderers in the scene and then culling those that fall outside the view frustum of the camera.

What are renderers?

They are components attached to game objects that turn them into something that can be rendered—typically a MeshRenderer component.

Figuring out what can be culled requires us to keep track of multiple camera settings and matrices, for which we can use the ScriptableCullingParameters struct. Instead of filling it ourselves, we can delegate that work to the static CullResults.GetCullingParameters method. It takes a camera as input and produces the culling parameters as output. However, it doesn't return the parameters struct. Instead, we have to supply it as a second output parameter, writing out in front of it.

```csharp
	void Render (ScriptableRenderContext context, Camera camera) {
		ScriptableCullingParameters cullingParameters;
		CullResults.GetCullingParameters(camera, out cullingParameters);

		…
	}
```

Why do we have to write out?

Structs are value types, so they're treated like simple values. They aren't objects with an identity, with variables and fields only holding references to their location in memory. So passing a struct as an argument provides a method with a copy of that value. The method can change the copy, but that has no effect on the value that was copied. When a struct parameter is defined as an output parameter, it acts like an object reference, pointing to the place on the memory stack where the argument resides. When the method changes that parameter, it affects the original value, not a copy. The out keyword also tells us that the method is responsible for correctly setting the parameter, replacing the previous value.

Besides the output parameter, GetCullingParameters also returns whether it was able to create valid parameters.
Not all camera settings are valid; they can produce degenerate results that cannot be used for culling. So if it fails, we have nothing to render and can exit from Render.

```csharp
		if (!CullResults.GetCullingParameters(camera, out cullingParameters)) {
			return;
		}
```

Once we have the culling parameters, we can use them to cull. This is done by invoking the static CullResults.Cull method with both the culling parameters and the context as arguments. The result is a CullResults struct, which contains information about what is visible. In this case, we have to supply the culling parameters as a reference parameter, by writing ref in front of it.

```csharp
		if (!CullResults.GetCullingParameters(camera, out cullingParameters)) {
			return;
		}

		CullResults cull = CullResults.Cull(ref cullingParameters, context);
```

Why do we have to write ref?

It works just like out, except that the method is not required to assign something to the parameter. Whoever invokes the method is responsible for properly initializing the value first. So it can be used for input, and optionally for output.

Why is ScriptableCullingParameters a struct?

It's probably an optimization attempt, the idea being that you can create multiple parameter structs without having to worry about memory allocations. However, ScriptableCullingParameters is very large for a struct, which is why a reference parameter is used here, again for performance reasons. Maybe it started small but grew into a huge struct over time. Reusable object instances might be a better approach now, but we have to work with whatever Unity Technologies decides to use.
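The difference between out and ref can be seen with a plain struct, outside of Unity. A hypothetical example—Point, Create, and Shift are made-up names:

```csharp
struct Point {
	public int x, y;
}

static void Create (out Point p) {
	// out: the method must assign the parameter before returning;
	// the caller does not need to initialize it first.
	p = new Point { x = 1, y = 2 };
}

static void Shift (ref Point p) {
	// ref: the caller must initialize the value first;
	// the method modifies the original in place, not a copy.
	p.x += 10;
}
```

Calling Create(out point) followed by Shift(ref point) leaves point.x at 11. Without ref, Shift would receive a copy and the caller's value would remain unchanged.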