This is the sixth installment of a tutorial series covering Unity's scriptable render pipeline. It's about adding support for alpha clipping and semi-transparent materials.

This tutorial is made with Unity 2018.3.0f2.

Besides potentially discarding fragments, alpha-clipped rendering works the same as opaque rendering and both can be mixed without issue. But because alpha clipping prevents some GPU optimizations it is typical to first render all purely opaque objects before rendering all alpha-clipped objects. That enables the most GPU optimization, potentially limits the amount of alpha-clipped fragments as more end up hidden behind opaque geometry, and can also reduce the amount of batches. All this can be done by simply setting the alpha-clipped materials to use a later render queue. The default material inspector exposes the queue, so we can manually change it. The default queue for alpha-clipped materials is 2450, corresponding to the AlphaTest option from the dropdown menu.
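As an aside, a shader can also declare a default queue for all materials that use it, via a Tags block. This is an illustrative sketch only, not the tutorial's approach, which changes the queue per material via the inspector:

```shaderlab
SubShader {
    // Materials made with this shader start in the AlphaTest queue (2450)
    // by default; the material inspector can still override it.
    Tags { "Queue" = "AlphaTest" }
    // … passes as before.
}
```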

Note that we could optimize further by eliminating the UV coordinates too, but that optimization is less important so I won't cover that. Likewise, you could use a shader feature to only check the triangle facing when clipping is off, which is another optimization I skip.

Now we can make sure that we only clip in Lit if the _CLIPPING keyword is defined.
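Guarded by the keyword, the clip in the lit pass could look like this sketch, assuming the sampled alpha value and the _Cutoff property used for clipping:

```hlsl
#if defined(_CLIPPING)
    // Discard the fragment when its alpha ends up below the threshold.
    clip(albedoAlpha.a - _Cutoff);
#endif
```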

The only reason to use the multi-compile approach is when the set of enabled keywords changes during play. That's the case for the shadow keywords, and it can also be true if you configure materials during play.

All multi-compile shader variants have to always be included, both when shaders are compiled in the editor and when putting them in a build. The shader feature alternative only includes the variants that are actually needed, as far as the Unity editor can determine. This can significantly reduce shader compilation time and the size of builds.

We can now add another multi-compile statement, but the expectation is that this toggle won't change during play but only when editing material assets. So we don't need to always generate shader variants for both options. We can do that by using the #pragma shader_feature directive instead. In case of a single toggle keyword, we can suffice with just listing that keyword and nothing else. Do this for both passes.
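For a single toggle keyword the directive is just the keyword itself, added to both passes:

```hlsl
// Unlike multi_compile, only the variants that materials
// actually need get compiled and included in builds.
#pragma shader_feature _CLIPPING
```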

Add a toggle property to control clipping to the shader. It has to be a float, with a default value of zero. Give it a Toggle attribute, which will make it show up as a checkbox. Besides that, the attribute can be supplied with a keyword that it enables or disables when the property is changed. We'll use the _CLIPPING keyword.
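The property declaration, with the Toggle attribute linked to the _CLIPPING keyword:

```shaderlab
[Toggle(_CLIPPING)] _Clipping ("Alpha Clipping", Float) = 0
```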

When alpha clipping is used the GPU can no longer assume that the entire triangle gets rendered, which makes some optimizations impossible. So it's best to only enable alpha clipping when necessary. So we'll create two shader variants: one with and one without clipping. We can do that with a shader keyword, like the pipeline controls whether shadows are used, except this time we'll control it via a material property.

You can avoid the problem by disabling instancing and batching when rendering shadows. But that shouldn't be necessary, because materials with a different clip mode likely have other relevant properties that differ too, which prevents them from getting batched. Just a slightly different cutoff value will prevent incorrect batching.

The clip mode always gets applied when rendering shadows. However, Unity aggressively batches shadow casters, even if objects have different materials, and ignores the clip mode when doing so. This means that when you mix shadow casters that are identical except for their clip mode, it is arbitrary which clip mode gets used for the entire batch. Because clip mode is set per shader, it cannot vary per instance.

The inside surface now gets shaded correctly, although it still ends up darker than the outside because of self-shadowing.

The GPU can tell the fragment program whether it's shading a fragment of a front or a back face. We can access this information by adding an additional parameter to LitPassFragment . The type and semantic of this parameter depend on the API, but we can use the FRONT_FACE_TYPE and FRONT_FACE_SEMANTIC macros from the Core library. Likewise, we can use the IS_FRONT_VFACE macro to choose between two alternatives based on whether we're dealing with a front or a back face. Use this to negate the normal vector when necessary.
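A sketch of the adjusted fragment function signature; the VertexOutput struct name and field are assumptions based on the structs used elsewhere in this tutorial:

```hlsl
float4 LitPassFragment (
    VertexOutput input,
    FRONT_FACE_TYPE isFrontFace : FRONT_FACE_SEMANTIC
) : SV_TARGET {
    // Keep the normal for front faces, negate it for back faces.
    input.normal = IS_FRONT_VFACE(isFrontFace, input.normal, -input.normal);
    // … rest of the fragment program is unchanged.
}
```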

It turns out that the lighting is flipped. What's lit should be dark, and vice versa. That's because the normal vectors are meant to be used for the outside, not the inside. So we have to negate the normal vectors when rendering back faces.

We're now seeing both sides of the geometry, but the inside isn't lit correctly. This is easiest to see by having our materials cull the front faces, so we only see the insides.

Although we have defined it as a shader property, the cull mode is not directly used by the shader programs. It's used by the GPU to decide which triangles are passed to the fragment programs and which are discarded. We control this via a Cull statement in the shader pass. If we used a fixed cull mode then something like Cull Off would suffice, but we can also make it depend on our shader property by writing Cull [_Cull]. Do this for both passes.
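In each pass that looks like:

```shaderlab
Pass {
    Cull [_Cull]
    // … HLSL program as before.
}
```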

We can expose this property via an enum popup, by adding the Enum attribute to the property. The desired enum type can be supplied as an argument, which in this case is CullMode from the UnityEngine.Rendering namespace.
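The property with its Enum attribute:

```shaderlab
[Enum(UnityEngine.Rendering.CullMode)] _Cull ("Cull", Float) = 2
```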

Which sides get rendered is controlled by a shader's cull mode. Either no culling takes place, all front-facing triangles are culled, or all back-facing triangles are culled. We can add a float shader property that represents an enum value, with 2 as the default, corresponding to the usual back-face culling.

Because only the front side of geometry gets rendered, our alpha-clipped objects are missing their back sides. This is obvious when rotating the view around them. Also, their shadows don't match what we see, because only the front side relative to the light source casts a shadow. The solution to this is to render both sides of the geometry, which allows us to see the inside of the object surfaces and makes the inside surface cast shadows.

Clipping shadows works exactly like clipping in the lit pass, so adjust the ShadowCaster include file accordingly. Because the final alpha value depends on both the main map and the material color, we now also have to sample the instanced color in ShadowCasterPassFragment , so we have to pass the instance ID along as well.

Objects with an alpha-clipped material are now rendered with holes in them. The size of the holes depends on the cutoff value. However, that's only true for the object surface itself. The shadows that they cast are still solid, because we haven't adjusted those yet.

Add the corresponding variable to the UnityPerMaterial buffer. Then invoke the clip function with the fragment's alpha value minus the threshold. That will cause all fragments that end up below the threshold to be discarded, which means that they don't get rendered.
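A sketch of both steps, assuming the UnityPerMaterial buffer that also holds _MainTex_ST, and the albedoAlpha name for the sampled texture data:

```hlsl
CBUFFER_START(UnityPerMaterial)
    float4 _MainTex_ST;
    float _Cutoff;
CBUFFER_END

// In LitPassFragment, after sampling the main map:
// fragments with alpha below the threshold get discarded.
clip(albedoAlpha.a - _Cutoff);
```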

Alpha clipping is done by discarding fragments when their alpha value falls below some cutoff threshold. The cutoff value lies between 0 and 1 and is configurable, so add a shader property for it, with ½ as the default.
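The property declaration; the display name is an assumption:

```shaderlab
_Cutoff ("Alpha Cutoff", Range(0.0, 1.0)) = 0.5
```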

We can now sample the main map in LitPassFragment with the SAMPLE_TEXTURE2D macro to retrieve the albedo and alpha data, which we then multiply with the color data. We'll also return the alpha value from now on. That's not needed right now, but will be used later.
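A sketch of the sampling and multiplication; the albedoAlpha and lighting names are placeholders, while the per-instance color access matches the macro used later in this tutorial:

```hlsl
float4 albedoAlpha = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv);
// Multiply albedo and alpha with the per-instance material color.
float4 color = albedoAlpha * UNITY_ACCESS_INSTANCED_PROP(PerInstance, _Color);
// … lighting calculations as before, now also returning alpha.
return float4(color.rgb * lighting, color.a);
```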

To apply the tiling and offset of the texture, add the required _MainTex_ST shader variable, in a UnityPerMaterial buffer. Then we can use the TRANSFORM_TEX macro when transferring the UV coordinates in LitPassVertex .
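In the vertex program that amounts to a single line; output and input are the struct variable names assumed here:

```hlsl
// In LitPassVertex: apply the texture's tiling and offset,
// which Unity supplies via the _MainTex_ST variable.
output.uv = TRANSFORM_TEX(input.uv, _MainTex);
```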

We need UV texture coordinates for sampling, which are part of the mesh data. So add them to the vertex input and output structs.
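A sketch of the struct additions; the other field names shown are assumptions, only the new uv fields matter here:

```hlsl
struct VertexInput {
    float4 pos : POSITION;
    float2 uv : TEXCOORD0;
    // … instance ID and other fields as before.
};

struct VertexOutput {
    float4 clipPos : SV_POSITION;
    float2 uv : TEXCOORD0;
    // … normal, world position, and other fields as before.
};
```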

In the Lit include file, add declarations for the main texture and its sampler state. This works like for the shadow map, but uses the TEXTURE2D and SAMPLER macros instead.
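The declarations:

```hlsl
TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);
```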

Create two new materials, one for lit alpha-clipped spheres and one for lit clipped squares, using the appropriate textures.

Add a main texture property to the Lit shader. We'll use it as the source for albedo and alpha, with solid white as the default.
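The property declaration; the display name is an assumption:

```shaderlab
_MainTex ("Albedo & Alpha", 2D) = "white" {}
```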

Import these textures and indicate that their alpha channel represents transparency. Their RGB channels are uniformly white, so they won't affect the material's appearance.

Alpha clipping is only useful when a material's alpha varies across its surface. The most straightforward way to achieve this is with an alpha map. Here are two textures for that, one for square geometry like quads and cubes, and one for spheres.

As explained in Rendering 11, Transparency , it's possible to cut holes in geometry by discarding fragments based on an alpha map. This technique is known as alpha clipping, alpha testing, or cutout rendering. Besides that, it's exactly the same as rendering opaque geometry. So to support alpha clipping we only have to adjust our shader.

Semi-Transparency

If a fragment doesn't get clipped it is fully opaque. So alpha-clipping can be used to cut holes in objects, but it cannot represent semi-transparent surfaces. We have some more work to do before our shader supports semi-transparency.

Blend Modes

When something is semi-transparent, at least some of what's behind it shines through. To achieve that with a shader we have to change how a fragment's own color gets blended with the color that got rendered earlier. We can do that by changing the blend mode of the shader. The blend mode is controlled like the cull mode, but with two weighing options that are used to blend the new and old color. The first is known as the source (what we're rendering now) and the second as the destination (what was rendered before). For example, the default blend mode is Blend One Zero, which means that the new color completely replaces the old one.

Aren't there also separate options for the alpha channel? Yes, but those are rarely used. Without explicitly specifying blend modes for alpha, all four channels are blended the same way.

Add two shader properties for the source and destination blend, just like for culling, except with the BlendMode enum type. Set their default values to one and zero.

	[Enum(UnityEngine.Rendering.CullMode)] _Cull ("Cull", Float) = 2
	[Enum(UnityEngine.Rendering.BlendMode)] _SrcBlend ("Src Blend", Float) = 1
	[Enum(UnityEngine.Rendering.BlendMode)] _DstBlend ("Dst Blend", Float) = 0

Add a blend statement to the lit pass only. The ShadowCaster pass only cares about depth, so blend modes don't affect it.

	Pass {
		Blend [_SrcBlend] [_DstBlend]
		Cull [_Cull]
		…
	}

The simplest form of semi-transparency is fading a fragment based on its alpha value. That's done by using the source's alpha as the weight for the source and one minus the source's alpha as the weight for the destination. We can select those options from the dropdown menus. Do this for new fade materials, and also turn off culling for them.

Blend modes set for fading, with incorrect results.

There are a lot of other blend modes too. Most are rarely used, but some are used for different kinds of transparency. For example, pre-multiplied blending uses one for the source instead of the source's alpha. That makes it possible to keep specular reflections, to represent surfaces like glass, but it requires some shader changes too, which I won't cover here.

Transparent Render Queue

Fading only works if there's already something behind what's getting rendered. Our pipeline already takes care of that, first rendering the opaque queues, then the skybox, and finally the transparent queues. Our fade materials just have to use the correct queue. The default Transparent option is fine.

Moved to transparent queue, still not correct.

Not Writing Depth

Semi-transparency now sometimes works as it should, but also produces weird results. This is especially noticeable because we're still casting shadows as if the surfaces were opaque. This happens because we're not culling, so both sides of the surfaces get rendered. Which part gets rendered first depends on the triangle order of the mesh. When a front-facing triangle gets rendered first, there isn't a back side to blend with yet. And the back won't get rendered because it's behind something that already got rendered.

The same problem also happens when two separate transparent objects are close to each other. Unity sorts transparent objects back-to-front, which is correct but can only consider the object position, not its shape. Part of an object that's drawn first can still end up in front of an object that gets drawn later. For example, put two mostly-overlapping quads in the scene, one a bit above the other, and adjust the view until the top one gets rendered first.

The top quad gets rendered first.

We cannot avoid this except by carefully controlling the placement of semitransparent objects or using materials with different render queues. In case of intersecting objects or a double-sided material with arbitrary triangle order, it will always go wrong. But what we can do is disable writing to the depth buffer for transparent materials. That way what gets rendered first will never block what gets rendered later.

Add another float shader property to control Z writing, which is on by default. We could again use a toggle, but that would always produce a keyword, which we don't need in this case. So instead we'll make it a custom enumeration with an off and on state, by writing [Enum(Off, 0, On, 1)].

	[Enum(UnityEngine.Rendering.BlendMode)] _DstBlend ("Dst Blend", Float) = 0
	[Enum(Off, 0, On, 1)] _ZWrite ("Z Write", Float) = 1

Add a ZWrite control to the lit pass only, as once again this doesn't concern shadows.

	Blend [_SrcBlend] [_DstBlend]
	Cull [_Cull]
	ZWrite [_ZWrite]

Not writing to depth buffer.

Now both quads get fully rendered, even when their draw order is incorrect. However, the bottom quad still gets drawn after the top quad, so it's still not correct. This is exacerbated by the solid shadows of the quads. It is also very obvious when the draw order flips. This is a limitation of transparent rendering that you have to keep in mind when designing a scene.

Double-Sided with Semi-Transparency

With Z writing disabled, the insides of objects always get rendered when culling is off. However, the draw order is still determined by the triangle order of the mesh. This is guaranteed to produce incorrect results when using the default sphere and cube.

Double-sided without writing to depth buffer.

With an arbitrary mesh, the only way to ensure that the back faces are drawn first is to duplicate the object and use two materials, one that culls front faces and another that culls back faces. Then adjust the render queues so that the inside is drawn first.

Separate objects and materials for inside and outside.

That works for an individual object, but not when multiple such objects visually overlap. In that case all outsides get drawn on top of all insides.

Cube outside on top of sphere inside.

Making a Double-Sided Mesh

The best way to render double-sided semi-transparent surfaces is to use a mesh specifically created for this purpose. The mesh must contain separate triangles for its inside and outside, ordered so that the inside is drawn first. Even then, this only reliably works for convex objects that never visually overlap themselves.

You can create a double-sided mesh with a separate 3D modeler, but we can also make a simple tool in Unity to quickly generate a double-sided variant of any source mesh. To do so, create a static DoubleSidedMeshMenuItem class and put its script file in an Editor folder. We'll use it to add the Assets/Create/Double-Sided Mesh item to Unity's menu. That's done by adding the MenuItem attribute to a static method, with the desired item path as an argument.

	using UnityEditor;
	using UnityEngine;

	public static class DoubleSidedMeshMenuItem {

		[MenuItem("Assets/Create/Double-Sided Mesh")]
		static void MakeDoubleSidedMeshAsset () {}
	}

The idea is that the user first selects a mesh and then activates the menu item, after which we'll create its double-sided equivalent. So the first step is to get a reference to the selected mesh, which is done via Selection.activeObject . If there isn't a selected mesh, instruct the user to select one and abort.

	static void MakeDoubleSidedMeshAsset () {
		var sourceMesh = Selection.activeObject as Mesh;
		if (sourceMesh == null) {
			Debug.Log("You must have a mesh asset selected.");
			return;
		}
	}

What does as do? It performs a cast to the specified type, if possible. Otherwise, the result is null . Note that this only works for reference types.

We begin by creating the inside portion of the mesh. Clone the source mesh by instantiating it, retrieve its triangles, reverse their order via System.Array.Reverse , and assign the result back to it. That flips the facing of all triangles.
		if (sourceMesh == null) {
			Debug.Log("You must have a mesh asset selected.");
			return;
		}

		Mesh insideMesh = Object.Instantiate(sourceMesh);
		int[] triangles = insideMesh.triangles;
		System.Array.Reverse(triangles);
		insideMesh.triangles = triangles;

Next, retrieve the normals, negate them, and assign them back.

		insideMesh.triangles = triangles;

		Vector3[] normals = insideMesh.normals;
		for (int i = 0; i < normals.Length; i++) {
			normals[i] = -normals[i];
		}
		insideMesh.normals = normals;

Then create a new mesh and invoke CombineMeshes on it. Its first argument is an array of CombineInstance structs, which just need a reference to the relevant mesh. First comes the inside mesh, then the source mesh. That guarantees that the inside triangles get drawn first. After that come three boolean arguments. The first needs to be true , indicating that the meshes must be merged into a single mesh, instead of defining multiple sub-meshes. The other two refer to matrices and lightmap data, which we don't need.

		insideMesh.normals = normals;

		var combinedMesh = new Mesh();
		combinedMesh.CombineMeshes(
			new CombineInstance[] {
				new CombineInstance { mesh = insideMesh },
				new CombineInstance { mesh = sourceMesh }
			},
			true, false, false
		);

Once that's done we no longer need the inside mesh, so destroy it immediately.

		combinedMesh.CombineMeshes(…);

		Object.DestroyImmediate(insideMesh);

Finally, create a mesh asset by invoking AssetDatabase.CreateAsset . Its first argument is the combined mesh and the second its asset path. We'll simply put it in the asset root folder and give it the same name as the source mesh with Double-Sided appended to it. The path and file name can be combined via the System.IO.Path.Combine method, so it works no matter which path separator your operating system uses. And we have to use asset as the file extension.
		Object.DestroyImmediate(insideMesh);

		AssetDatabase.CreateAsset(
			combinedMesh,
			System.IO.Path.Combine(
				"Assets", sourceMesh.name + " Double-Sided.asset"
			)
		);

Now we can select any mesh and create a double-sided variant of it. You can select the default sphere or cube by selecting a game object that uses that mesh and double-clicking on its reference in the mesh renderer component. The resulting assets don't look like imported meshes because they're custom assets, but they work fine. So we can use those meshes for transparent objects and switch our fade materials to back-face culling.

Using double-sided meshes.

Alpha-Clipped Shadows

Up to this point we have ignored shadows, so our semi-transparent objects still cast shadows as if they were opaque. They also receive shadows, but that's fine.

Can transparent objects receive shadows? Yes. All that's needed to receive shadows is to determine whether there's a shadow caster between a fragment and the light source, which the shadow map tells us. Whether the fragment is for an opaque or transparent surface is irrelevant. Having said that, Unity doesn't support shadow-receiving for transparent surfaces in combination with cascaded shadow maps. That's because Unity samples the cascaded shadow map in a separate full-screen pass, which relies on the depth buffer and thus cannot work in combination with transparency. As we sample all shadows per fragment, we don't have that limitation.

Shadow maps cannot represent partial shadows. The best that we can do is use alpha-clipped shadows. Currently, alpha clipping can be enabled for a transparent material, but that also affects the surface itself.

Both fading and clipping.

It is possible to only perform alpha clipping for shadows. We can support that by replacing the clipping toggle with three options: off, on, and shadows. First, turn off clipping for all materials that currently use it, so the _CLIPPING keyword gets cleared. Then replace the toggle with a KeywordEnum with the three options as arguments.

	//[Toggle(_CLIPPING)] _Clipping ("Alpha Clipping", Float) = 0
	[KeywordEnum(Off, On, Shadows)] _Clipping ("Alpha Clipping", Float) = 0

Now you can turn clipping back on. We did that because KeywordEnum uses different keywords. The keywords that we now use are formed by taking the shader property name followed by an underscore and then each option separately, all uppercase. So in the lit pass we have to change our shader feature to rely on _CLIPPING_ON instead.

	//#pragma shader_feature _CLIPPING
	#pragma shader_feature _CLIPPING_ON

Adjust the keyword check as well.
	#if defined(_CLIPPING_ON)
		clip(albedoAlpha.a - _Cutoff);
	#endif

The ShadowCaster pass must now use clipping when it's either on or set to shadows. In other words, it shouldn't clip when it's off. We'll use that criterion for the shader feature, relying solely on _CLIPPING_OFF.

	//#pragma shader_feature _CLIPPING
	#pragma shader_feature _CLIPPING_OFF

So we must now check whether _CLIPPING_OFF is not defined.

	//#if defined(_CLIPPING)
	#if !defined(_CLIPPING_OFF)
		float alpha = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv).a;
		alpha *= UNITY_ACCESS_INSTANCED_PROP(PerInstance, _Color).a;
		clip(alpha - _Cutoff);
	#endif

This makes it possible for transparent materials to cast alpha-clipped shadows. It's not a perfect match, but it was easy to support and might be good enough in some cases.

Fading with shadow clipping; cutoff 0.75.

You can turn off shadow casting per object if you don't want them. We'll also make that possible per material later.

Isn't there a way to create semi-transparent shadows? Unity's legacy pipeline has an option to render semi-transparent shadows, which is described in Rendering 12, Semitransparent Shadows. It fakes semi-transparency by dithering shadows, clipping them based on alpha and a screen-space dither pattern, relying on filtering to smudge the results. It can produce convincing shadows in some limited cases, but in general the results are so bad that it's unusable.