This is the first part of a tutorial series about creating a custom scriptable render pipeline. It covers the initial creation of a bare-bones render pipeline that we will expand in the future.

This series assumes that you've worked through at least the Object Management series and the Procedural Grid tutorial.

This tutorial is made with Unity 2019.2.6f1.

I have another tutorial series covering the scriptable render pipeline, but that one uses the experimental SRP API, which only works with Unity 2018. This series is for Unity 2019 and later. It takes a different and more modern approach but covers a lot of the same topics. It's still useful to work through the 2018 series if you don't want to wait until this one has caught up with it.

Make CustomRenderPipelineAsset.CreatePipeline return a new instance of CustomRenderPipeline. That will get us a valid and functional pipeline, although it doesn't render anything yet.
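In code, that amounts to a one-line change to the override, roughly:

    protected override RenderPipeline CreatePipeline () {
        return new CustomRenderPipeline();
    }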

RenderPipeline defines a protected abstract Render method that we have to override to create a concrete pipeline. It has two parameters: a ScriptableRenderContext and a Camera array. Leave the method empty for now.
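A minimal override that satisfies the abstract contract could look like this, with the body left empty for now:

    protected override void Render (
        ScriptableRenderContext context, Camera[] cameras
    ) {}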

Create a CustomRenderPipeline class and put its script file in the same folder as CustomRenderPipelineAsset. This will be the type used for the RP instance that our asset returns, thus it must extend RenderPipeline.
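Putting that together with the Render override described above, the complete starting class could look something like:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class CustomRenderPipeline : RenderPipeline {

        protected override void Render (
            ScriptableRenderContext context, Camera[] cameras
        ) {}
    }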

Replacing the default RP changed a few things. First, a lot of options have disappeared from the graphics settings, which is mentioned in an info panel. Second, we've disabled the default RP without providing a valid replacement, so nothing gets rendered anymore. The game window, scene window, and material previews are no longer functional. If you open the frame debugger—via Window / Analysis / Frame Debugger—and enable it, you will see that indeed nothing gets drawn in the game window.

Use the new menu item to add the asset to the project, then go to the Graphics project settings and select it under Scriptable Render Pipeline Settings.

That puts an entry in the Assets / Create menu. Let's be tidy and put it in a Rendering submenu. We do that by setting the menuName property of the attribute to Rendering/Custom Render Pipeline. This property can be set directly after the attribute type, within round brackets.
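With the menu name set, the attribute could look like this, with the class body elided:

    [CreateAssetMenu(menuName = "Rendering/Custom Render Pipeline")]
    public class CustomRenderPipelineAsset : RenderPipelineAsset { … }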

Now we need to add an asset of this type to our project. To make that possible, add a CreateAssetMenu attribute to CustomRenderPipelineAsset.
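In its simplest form that's just the attribute placed above the class declaration, roughly:

    [CreateAssetMenu]
    public class CustomRenderPipelineAsset : RenderPipelineAsset { … }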

The CreatePipeline method is defined with the protected access modifier, which means that only the class that defined the method—which is RenderPipelineAsset—and those that extend it can access it.
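For context, here's a simplified sketch of the relevant declaration; the actual RenderPipelineAsset class contains more members than shown:

    public abstract class RenderPipelineAsset : ScriptableObject {

        protected abstract RenderPipeline CreatePipeline ();

        …
    }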

The main purpose of the RP asset is to give Unity a way to get a hold of a pipeline object instance that is responsible for rendering. The asset itself is just a handle and a place to store settings. We don't have any settings yet, so all we have to do is give Unity a way to get our pipeline object instance. That's done by overriding the abstract CreatePipeline method, which should return a RenderPipeline instance. But we haven't defined a custom RP type yet, so begin by returning null.
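Assuming the usual using directives for UnityEngine and UnityEngine.Rendering, a first version of the asset type could look like this:

    public class CustomRenderPipelineAsset : RenderPipelineAsset {

        protected override RenderPipeline CreatePipeline () {
            return null;
        }
    }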

Currently, Unity uses the default render pipeline. To replace it with a custom render pipeline we first have to create an asset type for it. We'll use roughly the same folder structure that Unity uses for the Universal RP. Create a Custom RP asset folder with a Runtime child folder. Put a new C# script in there for the CustomRenderPipelineAsset type.

I put a few cubes in my test scene, all of which are opaque. The red ones use a material with the Standard shader while the green and yellow ones use a material with the Unlit/Color shader. The blue spheres use the Standard shader with Rendering Mode set to Transparent, while the white spheres use the Unlit/Transparent shader.

Fill the default scene with a few objects, using a mix of standard, unlit opaque and transparent materials. The Unlit/Transparent shader only works with a texture, so here is a UV sphere map for that.

We're going to exclusively work in linear color space, but Unity 2019.2 still uses gamma space as the default. Go to the player settings via Edit / Project Settings and then Player, then switch Color Space under the Other Settings section to Linear.

Create a new 3D project in Unity 2019.2.6 or later. We'll create our own pipeline, so don't select one of the RP project templates. Once the project is open you can go to the package manager and remove all packages that you don't need. We'll only use the Unity UI package in this tutorial to experiment with drawing the UI, so you can keep that one.

This tutorial lays the foundation with a minimal RP that draws unlit shapes using forward rendering. Once that's working, we can extend our pipeline in later tutorials, adding lighting, shadows, different rendering methods, and more advanced features.

The Universal RP is destined to replace the current legacy RP as the default. The idea is that it is a one-size-fits-most RP that will also be fairly easy to customize. Rather than customizing that RP this series will create an entire RP from scratch.

In the past Unity only supported a few built-in ways to render things. Unity 2018 introduced scriptable render pipelines—RPs for short—making it possible to do whatever we want, while still being able to rely on Unity for fundamental steps like culling. Unity 2018 also added two experimental RPs made with this new approach: the Lightweight RP and the High Definition RP. In Unity 2019 the Lightweight RP is no longer experimental, and it was rebranded to the Universal RP in Unity 2019.3.

To render anything, Unity has to determine what shapes have to be drawn, where, when, and with what settings. This can get very complex, depending on how many effects are involved. Lights, shadows, transparency, image effects, volumetric effects, and so on all have to be dealt with in the correct order to arrive at the final image. This is what a render pipeline does.

Rendering

Each frame Unity invokes Render on the RP instance. It passes along a context struct that provides a connection to the native engine, which we can use for rendering. It also passes an array of cameras, as there can be multiple active cameras in the scene. It is the RP's responsibility to render all those cameras in the order that they are provided.

Camera Renderer

Each camera gets rendered independently. So rather than have CustomRenderPipeline render all cameras we'll forward that responsibility to a new class dedicated to rendering one camera. Name it CameraRenderer and give it a public Render method with a context and a camera parameter. Let's store these parameters in fields for convenience.

    using UnityEngine;
    using UnityEngine.Rendering;

    public class CameraRenderer {

        ScriptableRenderContext context;

        Camera camera;

        public void Render (ScriptableRenderContext context, Camera camera) {
            this.context = context;
            this.camera = camera;
        }
    }

Have CustomRenderPipeline create an instance of the renderer when it gets created, then use it to render all cameras in a loop.

    CameraRenderer renderer = new CameraRenderer();

    protected override void Render (
        ScriptableRenderContext context, Camera[] cameras
    ) {
        foreach (Camera camera in cameras) {
            renderer.Render(context, camera);
        }
    }

Our camera renderer is roughly equivalent to the scriptable renderers of the Universal RP. This approach will make it simple to support different rendering approaches per camera in the future, for example one for the first-person view and one for a 3D map overlay, or forward vs. deferred rendering. But for now we'll render all cameras the same way.

Drawing the Skybox

The job of CameraRenderer.Render is to draw all geometry that its camera can see. Isolate that specific task in a separate DrawVisibleGeometry method for clarity. We'll begin by having it draw the default skybox, which can be done by invoking DrawSkybox on the context with the camera as an argument.

    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        DrawVisibleGeometry();
    }

    void DrawVisibleGeometry () {
        context.DrawSkybox(camera);
    }

This does not yet make the skybox appear. That's because the commands that we issue to the context are buffered. We have to submit the queued work for execution, by invoking Submit on the context. Let's do this in a separate Submit method, invoked after DrawVisibleGeometry.

    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        DrawVisibleGeometry();
        Submit();
    }

    void Submit () {
        context.Submit();
    }

The skybox finally appears in both the game and scene window. You can also see an entry for it in the frame debugger when you enable it. It's listed as Camera.RenderSkybox, with a single Draw Mesh item under it, which represents the actual draw call. This corresponds to the rendering of the game window; the frame debugger doesn't report drawing in other windows.

Skybox gets drawn.

Note that the orientation of the camera currently doesn't affect how the skybox gets rendered. We pass the camera to DrawSkybox, but that's only used to determine whether the skybox should be drawn at all, which is controlled via the camera's clear flags.

To correctly render the skybox—and the entire scene—we have to set up the view-projection matrix. This transformation matrix combines the camera's position and orientation—the view matrix—with the camera's perspective or orthographic projection—the projection matrix. It is known in shaders as unity_MatrixVP, one of the shader properties used when geometry gets drawn. You can inspect this matrix in the frame debugger's ShaderProperties section when a draw call is selected.

At the moment, the unity_MatrixVP matrix is always the same. We have to apply the camera's properties to the context, via the SetupCameraProperties method. That sets up the matrix as well as some other properties. Do this before invoking DrawVisibleGeometry, in a separate Setup method.

    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        Setup();
        DrawVisibleGeometry();
        Submit();
    }

    void Setup () {
        context.SetupCameraProperties(camera);
    }

Skybox, correctly aligned.

Command Buffers

The context delays the actual rendering until we submit it. Before that, we configure it and add commands to it for later execution. Some tasks—like drawing the skybox—can be issued via a dedicated method, but other commands have to be issued indirectly, via a separate command buffer. We need such a buffer to draw the other geometry in the scene.

To get a buffer we have to create a new CommandBuffer object instance. We need only one buffer, so create one by default for CameraRenderer and store a reference to it in a field. Also give the buffer a name so we can recognize it in the frame debugger. Render Camera will do.

    const string bufferName = "Render Camera";

    CommandBuffer buffer = new CommandBuffer {
        name = bufferName
    };

How does that object initializer syntax work?

It's as if we've written buffer.name = bufferName; as a separate statement after invoking the constructor. But when creating a new object, you can append a code block to the constructor's invocation. Then you can set the object's fields and properties in the block without having to reference the object instance explicitly. It makes explicit that the instance should only be used after those fields and properties have been set. Besides that, it makes initialization possible where only a single statement is allowed—for example a field initialization, which we're using here—without requiring constructors with many parameter variants. Note that we omitted the empty parameter list of the constructor invocation, which is allowed when object initializer syntax is used.

We can use command buffers to inject profiler samples, which will show up both in the profiler and the frame debugger. This is done by invoking BeginSample and EndSample at the appropriate points, which is at the beginning of Setup and Submit in our case. Both methods must be provided with the same sample name, for which we'll use the buffer's name.

    void Setup () {
        buffer.BeginSample(bufferName);
        context.SetupCameraProperties(camera);
    }

    void Submit () {
        buffer.EndSample(bufferName);
        context.Submit();
    }

To execute the buffer, invoke ExecuteCommandBuffer on the context with the buffer as an argument. That copies the commands from the buffer but doesn't clear it; we have to do that explicitly afterwards if we want to reuse it. Because execution and clearing are always done together it's handy to add a method that does both.

    void Setup () {
        buffer.BeginSample(bufferName);
        ExecuteBuffer();
        context.SetupCameraProperties(camera);
    }

    void Submit () {
        buffer.EndSample(bufferName);
        ExecuteBuffer();
        context.Submit();
    }

    void ExecuteBuffer () {
        context.ExecuteCommandBuffer(buffer);
        buffer.Clear();
    }

The Camera.RenderSkybox sample now gets nested inside Render Camera.

Render camera sample.

Clearing the Render Target

Whatever we draw ends up getting rendered to the camera's render target, which is the frame buffer by default but could also be a render texture. Whatever was drawn to that target earlier is still there, which could interfere with the image that we are rendering now. To guarantee proper rendering we have to clear the render target to get rid of its old contents. That's done by invoking ClearRenderTarget on the command buffer, which belongs in the Setup method.

CommandBuffer.ClearRenderTarget requires at least three arguments. The first two indicate whether the depth and color data should be cleared, which is true for both. The third argument is the color used for clearing, for which we'll use Color.clear.

    void Setup () {
        buffer.BeginSample(bufferName);
        buffer.ClearRenderTarget(true, true, Color.clear);
        ExecuteBuffer();
        context.SetupCameraProperties(camera);
    }

Clearing, with nested sample.

The frame debugger now shows a Draw GL entry for the clear action, which shows up nested in an additional level of Render Camera. That happens because ClearRenderTarget wraps the clearing in a sample with the command buffer's name. We can get rid of the redundant nesting by clearing before beginning our own sample. That results in two adjacent Render Camera sample scopes, which get merged.

    void Setup () {
        buffer.ClearRenderTarget(true, true, Color.clear);
        buffer.BeginSample(bufferName);
        //buffer.ClearRenderTarget(true, true, Color.clear);
        ExecuteBuffer();
        context.SetupCameraProperties(camera);
    }

Clearing, without nesting.

The Draw GL entry represents drawing a full-screen quad with the Hidden/InternalClear shader that writes to the render target, which isn't the most efficient way to clear it. This approach is used because we're clearing before setting up the camera properties. If we swap the order of those two steps we get the quick way to clear.

    void Setup () {
        context.SetupCameraProperties(camera);
        buffer.ClearRenderTarget(true, true, Color.clear);
        buffer.BeginSample(bufferName);
        ExecuteBuffer();
        //context.SetupCameraProperties(camera);
    }

Correct clearing.

Now we see Clear (color+Z+stencil), which indicates that both the color and depth buffers get cleared. Z represents the depth buffer and the stencil data is part of the same buffer.

Culling

We're currently seeing the skybox, but not any of the objects that we put in the scene. Rather than drawing every object, we're only going to render those that are visible to the camera. We do that by starting with all objects with renderer components in the scene and then culling those that fall outside of the view frustum of the camera.

Figuring out what can be culled requires us to keep track of multiple camera settings and matrices, for which we can use the ScriptableCullingParameters struct. Instead of filling it ourselves, we can invoke TryGetCullingParameters on the camera. It returns whether the parameters could be successfully retrieved, as it might fail for degenerate camera settings. To get hold of the parameter data we have to supply it as an output argument, by writing out in front of it. Do this in a separate Cull method that returns either success or failure.

    bool Cull () {
        ScriptableCullingParameters p;
        if (camera.TryGetCullingParameters(out p)) {
            return true;
        }
        return false;
    }

Why do we have to write out?

When a struct parameter is defined as an output parameter it acts like an object reference, pointing to the place on the memory stack where the argument resides. When the method changes the parameter it affects that value, not a copy. The out keyword tells us that the method is responsible for correctly setting the parameter, replacing the previous value. Try-get methods are a common way to both indicate success or failure and produce a result.

It is possible to inline the variable declaration inside the argument list when used as an output argument, so let's do that.

    bool Cull () {
        //ScriptableCullingParameters p;
        if (camera.TryGetCullingParameters(out ScriptableCullingParameters p)) {
            return true;
        }
        return false;
    }

Invoke Cull before Setup in Render and abort if it failed.

    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        if (!Cull()) {
            return;
        }

        Setup();
        DrawVisibleGeometry();
        Submit();
    }

Actual culling is done by invoking Cull on the context, which produces a CullingResults struct. Do this in Cull if successful and store the results in a field. In this case we have to pass the culling parameters as a reference argument, by writing ref in front of it.

    CullingResults cullingResults;

    …

    bool Cull () {
        if (camera.TryGetCullingParameters(out ScriptableCullingParameters p)) {
            cullingResults = context.Cull(ref p);
            return true;
        }
        return false;
    }

Why do we have to use ref?

The ref keyword works just like out, except that the method is not required to assign something to it. Whoever invokes the method is responsible for properly initializing the value first. So it can be used for input and optionally for output. In this case ref is used as an optimization, to prevent passing a copy of the ScriptableCullingParameters struct, which is quite large. It being a struct instead of an object is another optimization, to prevent memory allocations.

Drawing Geometry

Once we know what is visible we can move on to rendering those things. That is done by invoking DrawRenderers on the context with the culling results as an argument, telling it which renderers to use. Besides that, we have to supply drawing settings and filtering settings. Both are structs—DrawingSettings and FilteringSettings—for which we'll initially use their default constructors. Both have to be passed by reference. Do this in DrawVisibleGeometry, before drawing the skybox.

    void DrawVisibleGeometry () {
        var drawingSettings = new DrawingSettings();
        var filteringSettings = new FilteringSettings();

        context.DrawRenderers(
            cullingResults, ref drawingSettings, ref filteringSettings
        );

        context.DrawSkybox(camera);
    }

We don't see anything yet because we also have to indicate which kind of shader passes are allowed. As we only support unlit shaders in this tutorial we have to fetch the shader tag ID for the SRPDefaultUnlit pass, which we can do once and cache it in a static field.

    static ShaderTagId unlitShaderTagId = new ShaderTagId("SRPDefaultUnlit");

Provide it as the first argument of the DrawingSettings constructor, along with a new SortingSettings struct value. Pass the camera to the constructor of the sorting settings, as it's used to determine whether orthographic or distance-based sorting applies.

    void DrawVisibleGeometry () {
        var sortingSettings = new SortingSettings(camera);
        var drawingSettings = new DrawingSettings(
            unlitShaderTagId, sortingSettings
        );
        …
    }

Besides that we also have to indicate which render queues are allowed. Pass RenderQueueRange.all to the FilteringSettings constructor so we include everything.

    var filteringSettings = new FilteringSettings(RenderQueueRange.all);

Drawing unlit geometry.

Only the visible objects that use the unlit shader get drawn. All the draw calls are listed in the frame debugger, grouped under RenderLoop.Draw. There's something weird going on with transparent objects, but let's first look at the order in which the objects are drawn. That's shown by the frame debugger and you can step through the draw calls by selecting one after the other or using the arrow keys.

Stepping through the frame debugger.

The drawing order is haphazard. We can force a specific draw order by setting the criteria property of the sorting settings. Let's use SortingCriteria.CommonOpaque.

    var sortingSettings = new SortingSettings(camera) {
        criteria = SortingCriteria.CommonOpaque
    };

Common opaque sorting.

Objects now get drawn more-or-less front-to-back, which is ideal for opaque objects. If something ends up drawn behind something else its hidden fragments can be skipped, which speeds up rendering. The common opaque sorting option also takes some other criteria into consideration, including the render queue and materials.
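For reference, here's a sketch of DrawVisibleGeometry with all the pieces from this section combined; it contains nothing beyond the snippets shown above:

    void DrawVisibleGeometry () {
        var sortingSettings = new SortingSettings(camera) {
            criteria = SortingCriteria.CommonOpaque
        };
        var drawingSettings = new DrawingSettings(
            unlitShaderTagId, sortingSettings
        );
        var filteringSettings = new FilteringSettings(RenderQueueRange.all);

        context.DrawRenderers(
            cullingResults, ref drawingSettings, ref filteringSettings
        );

        context.DrawSkybox(camera);
    }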