This is the twelfth installment of a tutorial series covering Unity's scriptable render pipeline. It's about improving image quality by adjusting the render scale, applying MSAA, and rendering to HDR buffers in combination with tone mapping.

This tutorial is made with Unity 2018.4.6f1.

Render Scale

The camera determines the width and height of the image that gets rendered; that's out of the pipeline's control. But we can do whatever we want before rendering to the camera's target. We can render to intermediate textures, which we can give any size we like. For example, we could render everything to a smaller texture, followed by a final blit to the camera's target to scale it up to the desired size. That reduces image quality, but speeds up rendering because there are fewer fragments to process. The Lightweight/Universal pipeline has a Render Scale option to support this, so let's add it to our own pipeline as well.

Add a slider for the render scale to MyPipelineAsset, initially with a range of ¼–1. Reducing the resolution to a quarter drops the quality by a lot, as the pixel count gets divided by sixteen, and is most likely unacceptable unless the original resolution is very high.
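
As a minimal sketch, the slider could look like the field below. The name renderScale matches what a later snippet passes to the pipeline constructor, but the exact surrounding code is assumed here.

	// Render scale slider; 1 means rendering at the camera's full resolution.
	[SerializeField, Range(0.25f, 1f)]
	float renderScale = 1f;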

When rendering a camera in Render, determine whether we're using scaled rendering before we create render textures in case we have an active stack. We use scaled rendering when the render scale has been reduced, but only do so for a game camera, so the scene, preview, and other cameras remain unaffected. Keep track of this decision with a boolean variable so we can refer back to it.
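
One way to express that decision, sketched under the assumption that renderScale was stored in a field by the constructor; the scaledRendering name matches a later snippet in this tutorial.

		// Only game cameras use scaled rendering; scene and preview
		// cameras keep their default resolution.
		bool scaledRendering =
			renderScale < 1f && camera.cameraType == CameraType.Game;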

Keep track of the render width and height in variables as well. They're determined by the camera by default, but must be adjusted when using scaled rendering.
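
A sketch of what that could look like in Render, assuming the scaledRendering boolean from the previous step; the renderWidth and renderHeight names match later snippets.

		// Default to the camera's pixel dimensions.
		int renderWidth = camera.pixelWidth;
		int renderHeight = camera.pixelHeight;
		if (scaledRendering) {
			// Shrink the intermediate buffers; the final blit scales back up.
			renderWidth = (int)(renderWidth * renderScale);
			renderHeight = (int)(renderHeight * renderScale);
		}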

We must now render to an intermediate texture when either scaled rendering or post-processing is used. Keep track of this with a boolean as well and use the adjusted width and height when getting the textures.
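
Sketched here before MSAA gets added, so the later multisampled version appends more arguments to these calls; the texture identifiers match the snippets further down.

		bool renderToTexture = scaledRendering || activeStack;
		if (renderToTexture) {
			// Use the adjusted width and height instead of the camera's.
			cameraBuffer.GetTemporaryRT(
				cameraColorTextureId, renderWidth, renderHeight, 0,
				FilterMode.Bilinear
			);
			cameraBuffer.GetTemporaryRT(
				cameraDepthTextureId, renderWidth, renderHeight, 24,
				FilterMode.Point, RenderTextureFormat.Depth
			);
			…
		}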

From now on the adjusted width and height must be passed to the active stack when RenderAfterOpaque gets invoked.
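
This matches the invocation shown later in the MSAA section, minus the sample count that gets appended there.

			activeStack.RenderAfterOpaque(
				postProcessingBuffer, cameraColorTextureId,
				cameraDepthTextureId, renderWidth, renderHeight
			);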

The same is true for RenderAfterTransparent. Now we must always release the textures when we rendered to them, but only invoke RenderAfterTransparent when a stack is in use. If not, we can use a regular blit to copy the scaled texture to the camera's target.
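
A sketch of that logic at the end of Render; the command-buffer execution details are an assumption, while the blit and release calls match snippets later in this tutorial.

		if (renderToTexture) {
			if (activeStack) {
				activeStack.RenderAfterTransparent(
					postProcessingBuffer, cameraColorTextureId,
					cameraDepthTextureId, renderWidth, renderHeight
				);
				context.ExecuteCommandBuffer(postProcessingBuffer);
				postProcessingBuffer.Clear();
			}
			else {
				// No stack: a regular blit scales up to the camera's target.
				cameraBuffer.Blit(
					cameraColorTextureId, BuiltinRenderTextureType.CameraTarget
				);
			}
			// Always release the textures we rendered to.
			cameraBuffer.ReleaseTemporaryRT(cameraColorTextureId);
			cameraBuffer.ReleaseTemporaryRT(cameraDepthTextureId);
		}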

Adjusting the render scale affects everything that our pipeline renders, except shadows, as they have their own size. A slight reduction of the render scale seems to apply a bit of anti-aliasing, although haphazardly. But further reduction makes it clear that this is just a loss of detail that gets smudged by bilinear interpolation when blitting to the final render target.

A render scale of 0.5 is the most straightforward case: we end up with a single source pixel per block of 2×2 target pixels. Each final pixel uses the same four weights for interpolation, but there are four possible orientations.
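
To make those weights concrete (this arithmetic is mine, not spelled out above): at render scale 0.5 each target pixel center lands a quarter pixel away from the nearest source pixel center on both axes, so bilinear interpolation mixes per-axis weights of ¾ and ¼. That yields ¾ × ¾ = 9/16 for the nearest source pixel, ¾ × ¼ = 3/16 for each of its two axis-aligned neighbors, and ¼ × ¼ = 1/16 for the diagonal one. Which corner of a 2×2 target block a pixel occupies determines the orientation of these weights.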

Other render scales produce pixels with varying weight configurations, because the distance from source to target pixel varies in a regular pattern that depends on the scale.

We can scale down to improve performance at the cost of image quality. We can do the opposite as well: scale up to improve image quality at the cost of performance. To make this possible, increase the maximum render scale to 2 in MyPipelineAsset.
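
Assuming the slider field sketched earlier, only its range changes.

	[SerializeField, Range(0.25f, 2f)]
	float renderScale = 1f;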

Also activate scaled rendering in MyPipeline.Render when the render scale is greater than 1.
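
Again assuming the earlier sketch, the comparison becomes an inequality so that both downscaling and upscaling trigger it.

		// Any scale other than 1, in either direction, uses scaled rendering.
		bool scaledRendering =
			renderScale != 1f && camera.cameraType == CameraType.Game;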

The image quality indeed improves, but is only really good when the scale is set to 2. At that scale we end up averaging a dedicated 2×2 block of pixels for each final pixel. This means that we're rendering four times as many pixels, which is the same as supersampling anti-aliasing: SSAA 2× using a regular grid.

Increasing the render scale further won't improve image quality. At 3 we end up with the same result as render scale 1, because the bilinear sample then lands exactly on the center pixel of each 3×3 block. At 4 we're back at 2×2 blocks per pixel, but closer together. That's because a single bilinear blit can only average four pixels. Taking advantage of higher scales would require a pass that performs more than one texture sample per fragment. While that's possible, it's impractical because the required work scales quadratically with the render scale: SSAA 4× would require us to render sixteen times as many pixels.

MSAA

An alternative to SSAA is MSAA: multi-sample anti-aliasing. The idea is the same, but the execution differs. MSAA keeps track of multiple samples per pixel, which don't have to be placed in a regular grid. The big difference is that the fragment program is only invoked once per primitive per fragment, so at the original resolution. The result is then copied to all subsamples that are covered by the rasterized triangle. This significantly reduces the amount of work that has to be done, but it means that MSAA only affects triangle edges and nothing else. High-frequency surface patterns and alpha-clipped edges remain aliased.

What about alpha-to-coverage? That's a trick to smooth alpha-clipped edges somewhat, which can produce decent results in some cases. It won't be covered in this tutorial.

Configuration

Add an option to select the MSAA mode to MyPipelineAsset. By default MSAA is off, the other options being 2×, 4×, and 8×, which can be represented with an enum. The enum values represent the amount of samples per pixel, so the default is 1.

	public enum MSAAMode {
		Off = 1,
		_2x = 2,
		_4x = 4,
		_8x = 8
	}

	…

	[SerializeField]
	MSAAMode MSAA = MSAAMode.Off;

MSAA mode.

Why not support MSAA 16×? You can do that, but it is very expensive for little extra quality gain compared to 8×, and doesn't have widespread support.

Pass the amount of samples per pixel to the pipeline instance.

		return new MyPipeline(
			…, renderScale, (int)MSAA
		);

And keep track of it in MyPipeline.

	int msaaSamples;

	public MyPipeline (…, float renderScale, int msaaSamples) {
		…
		this.msaaSamples = msaaSamples;
	}

Not all platforms support MSAA and the maximum sample count also varies. Going above the maximum could result in a crash, so we have to make sure that we remain within the limit. We can do that by assigning the sample count to QualitySettings.antiAliasing. Our pipeline doesn't use this quality setting, but it takes care of enforcing the limit when we assign to it. So after assigning to it we copy it back to our own sample count. The only thing we have to be aware of is that it yields zero when MSAA is unsupported, which we have to convert to a sample count of 1.

		QualitySettings.antiAliasing = msaaSamples;
		this.msaaSamples = Mathf.Max(QualitySettings.antiAliasing, 1);

Multisampled Render Textures

MSAA support is set per camera, so keep track of the samples used for rendering in Render and force it to 1 if the camera doesn't have MSAA enabled. Then, if we end up with more than one sample per pixel, we have to render to intermediate multi-sampled textures, MS textures for short.

		int renderSamples = camera.allowMSAA ? msaaSamples : 1;
		bool renderToTexture =
			scaledRendering || renderSamples > 1 || activeStack;

To configure the render textures correctly we have to add two more arguments to GetTemporaryRT. First the read-write mode, which is the default for the color buffer and linear for the depth buffer. The next argument is the sample count.

		if (renderToTexture) {
			cameraBuffer.GetTemporaryRT(
				cameraColorTextureId, renderWidth, renderHeight, 0,
				FilterMode.Bilinear, RenderTextureFormat.Default,
				RenderTextureReadWrite.Default, renderSamples
			);
			cameraBuffer.GetTemporaryRT(
				cameraDepthTextureId, renderWidth, renderHeight, 24,
				FilterMode.Point, RenderTextureFormat.Depth,
				RenderTextureReadWrite.Linear, renderSamples
			);
			…
		}

Try this out with all post-processing disabled.

MSAA 2×, 4×, 8×, plus no MSAA with render scale 2 for comparison.

Does MSAA work with directional shadows? It works fine for our render pipeline. Unity's pipelines have trouble because they use a screen-space pass for cascaded directional shadows. We'll encounter the same kind of problem a bit further into this tutorial.

Compared to doubling the render scale, MSAA 4× ends up slightly better than render scale 2, with the caveat that MSAA only affects geometry edges while the render scale affects everything. You could also combine both approaches. For example, MSAA 4× at render scale 2 is roughly comparable to MSAA 8× alone at render scale 1, although it uses sixteen samples per final pixel instead of eight.

MSAA 4× and 8×, both at render scale 2.

Resolving MS Textures

While we can render directly to MS textures, we cannot directly read from them the normal way. If we want to sample a pixel it must first be resolved, which means averaging all its samples to arrive at the final value. The resolve happens for the entire texture at once, in a special Resolve Color pass that gets inserted automatically before a pass that samples the texture.

Resolving color before the final blit.

Resolving the MS texture creates a temporary regular texture, which remains valid until something new gets rendered to the MS texture. So if we sample from and then render to the MS texture multiple times, we end up with extra resolve passes for the same texture. You can see this when activating a post-effect stack with blurring enabled. At strength 5 we get three resolve passes.

Resolving three times with blur strength 5.

The additional resolve passes are useless, because our full-screen effects don't benefit from MSAA. To avoid needlessly rendering to an MS texture we can blit to an intermediate texture once and then use that instead of the camera target. To make this possible, add a samples parameter to the RenderAfterOpaque and RenderAfterTransparent methods in MyPostProcessingStack. If blurring is enabled and MSAA is used, copy to a resolved texture and pass that to Blur.

	static int resolvedTexId =
		Shader.PropertyToID("_MyPostProcessingStackResolvedTex");

	…

	public void RenderAfterOpaque (
		CommandBuffer cb, int cameraColorId, int cameraDepthId,
		int width, int height, int samples
	) {
		…
	}

	public void RenderAfterTransparent (
		CommandBuffer cb, int cameraColorId, int cameraDepthId,
		int width, int height, int samples
	) {
		if (blurStrength > 0) {
			if (samples > 1) {
				cb.GetTemporaryRT(
					resolvedTexId, width, height, 0, FilterMode.Bilinear
				);
				Blit(cb, cameraColorId, resolvedTexId);
				Blur(cb, resolvedTexId, width, height);
				cb.ReleaseTemporaryRT(resolvedTexId);
			}
			else {
				Blur(cb, cameraColorId, width, height);
			}
		}
		else {
			Blit(cb, cameraColorId, BuiltinRenderTextureType.CameraTarget);
		}
	}

Add the render samples as arguments in MyPipeline.Render.

			activeStack.RenderAfterOpaque(
				postProcessingBuffer, cameraColorTextureId,
				cameraDepthTextureId, renderWidth, renderHeight, renderSamples
			);
			…
			activeStack.RenderAfterTransparent(
				postProcessingBuffer, cameraColorTextureId,
				cameraDepthTextureId, renderWidth, renderHeight, renderSamples
			);

The three resolve passes are now reduced to one, plus a simple blit.

Resolving once with blur strength 5.

No Depth Resolve

Color samples are resolved by averaging them, but this doesn't work for the depth buffer. Averaging adjacent depth values makes no sense, and there is no universal approach that can be used instead, so multisampled depth doesn't get resolved at all. As a result, the depth stripes effect doesn't work when MSAA is enabled.

The naive approach to get the effect working again is to simply not apply MSAA to the depth texture when depth stripes are enabled. First, add a getter property to MyPostProcessingStack that indicates whether it needs to read from a depth texture. That's only required when the depth stripes effect is used.

	public bool NeedsDepth {
		get { return depthStripes; }
	}

Now we can keep track of whether we need an accessible depth texture in MyPipeline.Render. Only when we need depth do we have to get a separate depth texture; otherwise we can make do with setting the depth bits of the color texture. And if we do need a depth texture, let's explicitly always set its samples to 1 to disable MSAA for it.

		bool needsDepth = activeStack && activeStack.NeedsDepth;

		if (renderToTexture) {
			cameraBuffer.GetTemporaryRT(
				cameraColorTextureId, renderWidth, renderHeight,
				needsDepth ? 0 : 24, FilterMode.Bilinear,
				RenderTextureFormat.Default,
				RenderTextureReadWrite.Default, renderSamples
			);
			if (needsDepth) {
				cameraBuffer.GetTemporaryRT(
					cameraDepthTextureId, renderWidth, renderHeight, 24,
					FilterMode.Point, RenderTextureFormat.Depth,
					RenderTextureReadWrite.Linear, 1
				);
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,
					cameraDepthTextureId,
					RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
				);
			}
			else {
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
				);
			}
		}

This also affects setting the render target after drawing opaque effects.

		context.DrawSkybox(camera);

		if (activeStack) {
			…
			if (needsDepth) {
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.Load, RenderBufferStoreAction.Store,
					cameraDepthTextureId,
					RenderBufferLoadAction.Load, RenderBufferStoreAction.Store
				);
			}
			else {
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.Load, RenderBufferStoreAction.Store
				);
			}
			context.ExecuteCommandBuffer(cameraBuffer);
			cameraBuffer.Clear();
		}

And adjust which textures get released at the end.

		DrawDefaultPipeline(context, camera);

		if (renderToTexture) {
			…
			cameraBuffer.ReleaseTemporaryRT(cameraColorTextureId);
			if (needsDepth) {
				cameraBuffer.ReleaseTemporaryRT(cameraDepthTextureId);
			}
		}

Depth stripes with MSAA 8×.

Depth stripes now show up when MSAA is enabled, but anti-aliasing appears to be broken. That happened because depth information is no longer affected by MSAA. We need a different approach.