This is the tenth installment of a tutorial series covering Unity's scriptable render pipeline. It adds support for cross-fading LOD groups and shader variant stripping.

This tutorial is made with Unity 2018.4.4f1.

This works the same as using separate child hierarchies per LOD level, except that some objects are part of multiple levels.

Another way to create a LOD group is to add details to a base visualization. As an example, I created an abstract tree from cubes and spheres. The core of the tree is added to all three LOD levels. Smaller branches, leaves, and bark are added to the first two levels. And the smallest leaves and bark details are added only to LOD 0.

When you mark a LOD group as static it still switches between LOD levels, so static batching doesn't apply to it. But it does get included for lightmapping purposes. LOD 0 is used for lightmapping as expected, and all other LOD levels get baked lighting as well. At least, that is the case when the progressive lightmapper is used. Enlighten has more trouble with the other LOD levels, requiring light probes. This also means that only static LOD 0 works with dynamic global illumination. If dynamic GI is important then you should make sure that the other LOD levels aren't static, so they receive GI via light probes.

You can now see the LOD selection in action, either by moving the camera or adjusting the LOD bias.

Typically an object has multiple LOD levels, each using a progressively simpler mesh. To clearly see different LOD levels being used, duplicate the sphere child twice to create LOD levels 1 and 2, and give each a different color. Then add them to the LOD group, for example at the 15% and 10% thresholds, shifting complete culling to 5%.

You can adjust the thresholds by dragging them, and can also add or remove levels via a popup menu by right-clicking them. As we only have a single LOD level, remove the other two. This means that we always show the sphere, until its visual size drops below 10%. At least, that's the case when there is no LOD bias. There is a global LOD bias that can be used to adjust all LOD thresholds. It can be set via code and via the Quality panel of the project settings. For example, setting Lod Bias to 1.5 means that the visual size of objects is overestimated by that factor, so our spheres only get culled when they drop below 6.7%. The inspector of the LOD group will indicate that a bias is in effect.
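To make the arithmetic concrete, here is a quick Python sketch, purely illustrative and outside Unity, of how the bias rescales the cull threshold: with a bias of 1.5, the effective threshold becomes 0.10 / 1.5 ≈ 6.7%, matching the figure above.

```python
def culled(coverage, cull_threshold=0.10, lod_bias=1.0):
    """An object is culled once its biased visual size drops below the
    cull threshold. A Lod Bias of 1.5 overestimates the visual size
    by that factor, lowering the effective threshold to ~6.7%."""
    return coverage * lod_bias < cull_threshold

# A sphere at 8% coverage is culled without bias, but survives with a
# 1.5 bias, because 0.08 * 1.5 = 0.12 is above the 10% threshold.
```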

The visual level of detail of an object can be controlled by adding a LOD Group component to a game object's root. It has three LOD levels by default. The displayed percentages correspond to the estimated visual size of the object, expressed as how much of the viewport it covers, vertically. As long as that stays above 60%, LOD 0 is used; otherwise it switches to a lower LOD level, until the object gets culled completely below 10%. Drag the sphere child onto the LOD 0 box, so its renderer gets used for the LOD 0 visualization.
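The selection logic itself is simple. Here is an illustrative Python sketch; only the 60% and 10% figures come from the default setup described above, while the 30% middle threshold is a made-up example value.

```python
def select_lod(coverage, thresholds):
    """Return the LOD level index for an estimated viewport coverage,
    or None once the object is culled entirely. thresholds[i] is the
    coverage below which level i stops being used."""
    for level, lower_bound in enumerate(thresholds):
        if coverage > lower_bound:
            return level
    return None

# LOD 0 above 60%, culled below 10%; 30% is a hypothetical middle value.
thresholds = [0.60, 0.30, 0.10]
```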

The typical approach to create a level-of-detail object is to use a root object with a child object for each detail level. The most detailed or complete visualization level is known as LOD 0. As an example, let's create a prefab that has a single sphere child. As always, we use our own material, and also use an InstancedMaterialProperties component to give it an obvious color, like red.

Ideally, we render as little as possible. The less that gets rendered, the less strain there is on the GPU, which means that we can get a higher frame rate and require less energy to render the scene. If something becomes so visually small that it is no longer visible—smaller than a single pixel—then we can skip rendering it. It is also possible to skip things when they would still be visible, but small enough that their absence would go mostly unnoticed. Thus, we can control the level of detail of our scene.

LOD Blending

When an object switches from one LOD level to another there is a sudden swap or removal of renderers, which can be visually obvious and jarring. The transition can be made more gradual by blending between adjacent LOD levels.

Cross-Fading

LOD blending is controlled per LOD group and per individual LOD level. First, set the group's Fade Mode to Cross Fade. That makes an Animate Cross-fading toggle option appear, which allows you to choose between fading based on percentage or time.

When enabled, a timed transition happens when a LOD change should occur, lasting only a short while even if the object's visual size no longer changes. The transition duration can be set globally via LODGroup.crossFadeAnimationDuration and is half a second by default.

When disabled, the cross-fade is based on the visual percentage, and the exact range can be configured per LOD level, via their Fade Transition Width slider. When set to 1 the cross-fade covers the entire range of the LOD level. That makes the transition most gradual, but also means that transitions are active all the time. It's better to avoid that, because during a transition both LOD levels have to be rendered.

Cross-fading across entire LOD range.

What about the Speed Tree fade mode option?

That mode is specifically for SpeedTree trees, which use their own LOD system to collapse trees and transition between 3D models and billboard representations.

When cross-fading is used, Unity selects a shader variant with the LOD_FADE_CROSSFADE keyword, so add a multi-compile directive for it to the normal pass of our shader.

```hlsl
			#pragma multi_compile _ LOD_FADE_CROSSFADE
```

To check whether fading is indeed used, make all fading fragments solid black in Lit.hlsl.

```hlsl
float4 LitPassFragment (
	VertexOutput input, FRONT_FACE_TYPE isFrontFace : FRONT_FACE_SEMANTIC
) : SV_TARGET {
	UNITY_SETUP_INSTANCE_ID(input);
	#if defined(LOD_FADE_CROSSFADE)
		return 0;
	#endif
	…
}
```

Black spheres.

When all fade ranges are set to 1, this makes every sphere solid black, except those that end up visually larger than the viewport. In contrast, the trees that use additive LOD levels are only partially black with the same settings.
Objects that are part of both LOD levels aren't included in the cross-fade and are rendered as normal.

Partially black trees.

How much an object should be faded is made available via the first component of the unity_LODFade vector, which is part of the UnityPerDraw buffer.

```hlsl
CBUFFER_START(UnityPerDraw)
	float4x4 unity_ObjectToWorld, unity_WorldToObject;
	float4 unity_LODFade;
	…
CBUFFER_END
```

Returning that instead of solid black allows us to see the blend factor being used, although we can only see one of the two factors per fragment, due to overdraw. Transitions from the lowest LOD level to being culled involve only a single object, so there is no overdraw in that case.

```hlsl
	#if defined(LOD_FADE_CROSSFADE)
		return unity_LODFade.x;
	#endif
```

Blend factors.
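Unity doesn't document the exact shape of the animated transition, but conceptually the fade factor ramps from 1 to 0 over the transition duration. A rough Python sketch, assuming a linear ramp and the default half-second duration:

```python
def animated_fade(elapsed, duration=0.5):
    """Fade factor of the outgoing LOD level during a timed cross-fade:
    1 when the transition starts, 0 once the duration has passed. The
    linear ramp is an assumption; the default duration matches
    LODGroup.crossFadeAnimationDuration."""
    return max(0.0, 1.0 - elapsed / duration)
```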

Screen-Space Position

In case of transparent geometry we could use the blend factor to fade out, but that isn't possible for opaque geometry. What we can do instead is clip a portion of the fragments based on the blend factor, just like cutout rendering. That works for both opaque and transparent geometry. But the fade factor is the same for all fragments rendered for an object, so using only that as a threshold for clipping would still produce a sudden transition. So we have to add variety to the clip threshold per fragment.

The simplest way to add variety per fragment is to base it on the fragment's screen-space position. Begin by directly using its XY components as the result of LitPassFragment.

```hlsl
float4 LitPassFragment (
	VertexOutput input, FRONT_FACE_TYPE isFrontFace : FRONT_FACE_SEMANTIC
) : SV_TARGET {
	UNITY_SETUP_INSTANCE_ID(input);
	#if defined(LOD_FADE_CROSSFADE)
		return float4(input.clipPos.xy, 0, 0);
	#endif
	…
```

The XY coordinates are provided as pixel indices, so that will make nearly everything white. To get a sensible result, take some modulo of the screen-space position and divide that by the same value. Let's use 64.

```hlsl
	#if defined(LOD_FADE_CROSSFADE)
		return float4((input.clipPos.xy % 64) / 64, 0, 0);
	#endif
```

Screen-space UV coordinates.

The result is a grid filled with red-green gradient squares that repeat every 64 pixels. As it is relative to the screen, the pattern is always the same, even when the spheres visually change. We can use these coordinates to perform screen-space texture sampling.
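The modulo trick is easy to verify outside the shader. This illustrative Python sketch mirrors (clipPos.xy % 64) / 64 for a single coordinate:

```python
def screen_gradient(pixel, period=64):
    """Map a pixel coordinate to a 0..1 gradient that repeats every
    `period` pixels, like (clipPos.xy % 64) / 64 in the shader."""
    return (pixel % period) / period

# Pixels 0..63 ramp from 0 toward 1; pixel 64 wraps back to 0.
```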

Clipping

Let's create a separate method to clip based on LOD cross-fading. In it, clip just like for alpha-clipping, except based on the fade factor minus a bias instead of alpha minus the cutoff. Initially, use a 16-pixel vertical gradient for the bias.

```hlsl
void LODCrossFadeClip (float4 clipPos) {
	float lodClipBias = (clipPos.y % 16) / 16;
	clip(unity_LODFade.x - lodClipBias);
}

float4 LitPassFragment (
	VertexOutput input, FRONT_FACE_TYPE isFrontFace : FRONT_FACE_SEMANTIC
) : SV_TARGET {
	UNITY_SETUP_INSTANCE_ID(input);
	#if defined(LOD_FADE_CROSSFADE)
		//return float4((input.clipPos.xy % 64) / 64, 0, 0);
		LODCrossFadeClip(input.clipPos);
	#endif
	…
}
```

Clipping based on a tiled gradient.

We end up cutting horizontal bars out of our spheres. In some cases we can see part of both LOD levels, but even then parts are missing. That happens because when one LOD level clips a fragment, the other shouldn't, but right now their clipping is independent. We have to make the bias symmetrical, which we can do by flipping it when the fade factor drops below 0.5.

```hlsl
	float lodClipBias = (clipPos.y % 16) / 16;
	if (unity_LODFade.x < 0.5) {
		lodClipBias = 1.0 - lodClipBias;
	}
	clip(unity_LODFade.x - lodClipBias);
```

Symmetrical bias.

A downside of flipping the bias is that there is now an obvious visual change at the halfway point. This can also cause pattern interference when separate but visually overlapping objects flip at different times. In case of objects transitioning to getting culled, their visual intersection can become fully opaque.

Inconsistent pattern due to flipping.

We cannot avoid this until Unity provides additional data to the shader that allows us to identify which of the two LOD levels is being rendered. Then we could always flip one side instead of flipping both halfway through. One way to do this would be to always make one of the two fade factors negative, which might happen in a future version of Unity.
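To see why the flip makes the two levels complementary, here is a Python sketch of the clip logic. It assumes, as the flip trick implies, that the two blended levels receive complementary fade factors f and 1 − f:

```python
def lod_clip_bias(fade, gradient):
    """Mirror of the HLSL logic: flip the gradient-based bias when the
    fade factor drops below 0.5."""
    return 1.0 - gradient if fade < 0.5 else gradient

def survives(fade, gradient):
    """clip(x) discards the fragment when x < 0."""
    return fade - lod_clip_bias(fade, gradient) >= 0.0

# Away from the 0.5 halfway point, every fragment survives in at least
# one of the two levels, so no holes appear during the transition.
for fade in (0.1, 0.3, 0.7, 0.9):
    for g in (0.0, 0.25, 0.5, 0.75, 0.99):
        assert survives(fade, g) or survives(1.0 - fade, g)
```

Exactly at fade 0.5 both levels use the unflipped bias, so fragments with a bias above 0.5 are clipped by both, which is the abrupt halfway switch described above.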

Dithering

Using an obvious pattern for the bias is not a good idea. Instead, let's use a mostly uniform noise texture to perform dithering, which you can find here.

64×64 blue noise.

Where did you get that texture?

It's a blue noise pattern made by Christoph Peters. See his Free blue noise textures blog post for more details. All four channels of the texture contain the same data.

Import it as an uncompressed single-channel texture, set to alpha. Also set its Filter Mode to Point, because we use the exact pixel values and don't need any interpolation. Thus it also doesn't need mip maps.

Texture import settings.

Add a texture field to MyPipelineAsset so we can assign the dither pattern to our asset.

```csharp
	[SerializeField]
	Texture2D ditherTexture = null;
```

Pipeline with dither texture.

Then pass it to the constructor invocation of MyPipeline.

```csharp
		return new MyPipeline(
			dynamicBatching, instancing, ditherTexture,
			(int)shadowMapSize, shadowDistance, shadowFadeRange,
			(int)shadowCascades, shadowCascadeSplit
		);
```

In MyPipeline, keep track of the texture.

```csharp
	Texture2D ditherTexture;

	public MyPipeline (
		bool dynamicBatching, bool instancing, Texture2D ditherTexture,
		int shadowMapSize, float shadowDistance, float shadowFadeRange,
		int shadowCascades, Vector3 shadowCascadeSplit
	) {
		…
		this.ditherTexture = ditherTexture;
		this.shadowMapSize = shadowMapSize;
		…
	}
```

Configure the dither pattern before rendering the cameras. This means setting the texture, and we'll also set its scale-transform data globally. We assume it is a 64×64 texture, so the UV scale becomes one divided by 64. We can use the camera buffer to do this.
```csharp
	static int ditherTextureId = Shader.PropertyToID("_DitherTexture");
	static int ditherTextureSTId = Shader.PropertyToID("_DitherTexture_ST");

	…

	public override void Render (
		ScriptableRenderContext renderContext, Camera[] cameras
	) {
		base.Render(renderContext, cameras);

		ConfigureDitherPattern(renderContext);
		foreach (var camera in cameras) {
			Render(renderContext, camera);
		}
	}

	void ConfigureDitherPattern (ScriptableRenderContext context) {
		cameraBuffer.SetGlobalTexture(ditherTextureId, ditherTexture);
		cameraBuffer.SetGlobalVector(
			ditherTextureSTId, new Vector4(1f / 64f, 1f / 64f, 0f, 0f)
		);
		context.ExecuteCommandBuffer(cameraBuffer);
		cameraBuffer.Clear();
	}
```

On the shader side, we'll simply add the scale-transform to the UnityPerFrame buffer. Also define the texture and sample it with the transformed screen position to determine the clip bias used for cross-fading.

```hlsl
CBUFFER_START(UnityPerFrame)
	float4x4 unity_MatrixVP;
	float4 _DitherTexture_ST;
CBUFFER_END

…

TEXTURE2D(_DitherTexture);
SAMPLER(sampler_DitherTexture);

…

void LODCrossFadeClip (float4 clipPos) {
	float2 ditherUV = TRANSFORM_TEX(clipPos.xy, _DitherTexture);
	float lodClipBias =
		SAMPLE_TEXTURE2D(_DitherTexture, sampler_DitherTexture, ditherUV).a;
	if (unity_LODFade.x < 0.5) {
		lodClipBias = 1.0 - lodClipBias;
	}
	clip(unity_LODFade.x - lodClipBias);
}
```

Dithered cross-fading.

Because the dither pattern is sampled at the window's resolution it might be hard to see on high-resolution displays and screenshots. You can scale up the game view to get a better look at it.

Dithering zoomed ×4.

Why use a texture instead of LODDitheringTransition?

The Core library contains the LODDitheringTransition function, which clips based on a 3D seed value and fade factor. It uses the seed to generate a hash value which is then used for clipping.
While a hash-based approach can work, I have found this particular implementation unreliable, which manifests as pixel-sized holes and unstable results in some cases, at least for the Metal API. The HDRP pipeline bases the seed on the view direction, which has precision issues that aggravate this problem, but changing it to use the screen-space position doesn't solve all issues. In contrast, using a screen-space texture always works.
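As a side note on how the screen-space sampling works: with the 1/64 scale-transform, wrap addressing, and point filtering, window pixel (x, y) always samples texel (x mod 64, y mod 64) of the dither texture. An illustrative Python sketch for one coordinate, using integer pixel indices for simplicity (real fragment positions sit at pixel centers):

```python
import math

def dither_texel(pixel, st_scale=1.0 / 64.0, size=64):
    """Which texel of a size-by-size dither texture a window pixel
    samples, assuming the 1/64 UV scale set up above."""
    uv = pixel * st_scale          # TRANSFORM_TEX with ST (1/64, 1/64, 0, 0)
    wrapped = uv - math.floor(uv)  # wrap addressing keeps the fractional part
    return int(wrapped * size)     # point filtering selects a single texel
```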

Cross-Fading Shadows

We can apply the same technique to shadows. The LOD is chosen during culling, so the LOD of objects and their shadows match. First, also add the multi-compile directive for LOD_FADE_CROSSFADE to the shadow caster pass.

```hlsl
			#pragma shader_feature _CLIPPING_OFF
			#pragma multi_compile _ LOD_FADE_CROSSFADE
```

Then add the required data to ShadowCaster.hlsl.

```hlsl
CBUFFER_START(UnityPerFrame)
	float4x4 unity_MatrixVP;
	float4 _DitherTexture_ST;
CBUFFER_END

CBUFFER_START(UnityPerDraw)
	float4x4 unity_ObjectToWorld;
	float4 unity_LODFade;
CBUFFER_END

…

TEXTURE2D(_DitherTexture);
SAMPLER(sampler_DitherTexture);
```

Then copy LODCrossFadeClip and invoke it when appropriate in ShadowCasterPassFragment.

```hlsl
void LODCrossFadeClip (float4 clipPos) {
	…
}

float4 ShadowCasterPassFragment (VertexOutput input) : SV_TARGET {
	UNITY_SETUP_INSTANCE_ID(input);
	#if defined(LOD_FADE_CROSSFADE)
		LODCrossFadeClip(input.clipPos);
	#endif
	…
}
```

Dithered cross-fading shadows.

In the case of shadows the dithering is aligned with the shadow camera. Thus the dither pattern used for directional shadows moves differently than the one for the regular camera. The pattern for spotlight shadows only changes when the spotlight itself moves or rotates. But due to shadow filtering the pattern can get smudged.