This tutorial takes a look at how to create the FXAA post effect. It comes after the Depth of Field tutorial.

This tutorial is made with Unity 2017.3.0p3.

Displays have a finite resolution. As a result, image features that do not align with the pixel grid suffer from aliasing. Diagonal and curved lines appear as staircases, commonly known as jaggies. Thin lines can become disconnected and turn into dashed lines. High-contrast features that are smaller than a pixel sometimes appear and sometimes don't, leading to flickering when things move, commonly known as fireflies. A collection of anti-aliasing techniques has been developed to mitigate these issues. This tutorial covers the classical FXAA solution.

For this tutorial I've created a test scene similar to the one from Depth of Field. It contains areas of both high and low contrast, brighter and darker regions, multiple straight and curved edges, and small features. As usual, we're using HDR and linear color space. All scene screenshots are zoomed in to make individual pixels easier to distinguish.

The most straightforward way to get rid of aliasing is to render at a resolution higher than the display and downsample it. This is a spatial anti-aliasing method that makes it possible to capture and smooth out subpixel features that are too high-frequency for the display.

Supersampling anti-aliasing (SSAA) does exactly that. At minimum, the scene is rendered to a buffer with double the final resolution and blocks of four pixels are averaged to produce the final image. Even higher resolutions and different sampling patterns can be used to further improve the effect. This approach removes aliasing, but also slightly blurs the entire image.

While SSAA works, it is a brute-force approach that is very expensive. Doubling the resolution quadruples the amount of pixels that have to be both stored in memory and shaded. Especially fill rate becomes a bottleneck. To mitigate this, multisample anti-aliasing (MSAA) was introduced. It also renders to a higher resolution and later downsamples, but changes how fragments are rendered. Instead of simply rendering all fragments of a higher-resolution block, it renders a single fragment per triangle that covers that block, effectively copying the result to the higher-resolution pixels. This keeps the fill rate manageable. It also means that only the edges of triangles are affected; everything else remains unchanged. That's why MSAA doesn't smooth the transparent edges created via cutout materials.

CSAA refers to coverage sampling anti-aliasing. It is a variant of MSAA, but I won't go into the details here.

MSAA works quite well and is used often, but it still requires a lot of memory and it doesn't combine well with techniques that depend on the depth buffer, like deferred rendering. That's why many games opt for different anti-aliasing techniques.

A third way to perform anti-aliasing is via a post effect. These are full-screen passes like any other effect, so they don't require a higher resolution but might rely on temporary render textures. These techniques have to work at the final resolution, so they have no access to actual subpixel data. Instead, they have to analyse the image and selectively blur based on that interpretation.

Multiple post-effect techniques have been developed. The first one was morphological anti-aliasing (MLAA). In this tutorial, we'll create our own version of fast approximate anti-aliasing (FXAA). It was developed by Timothy Lottes at NVIDIA and does exactly what its name suggests. Compared to MLAA, it trades quality for speed. While a common complaint of FXAA is that it blurs too much, that varies depending on which variant is used and how it is tuned. We'll create the latest version—FXAA 3.11—specifically the high-quality variant for PCs.

We'll use the same setup for a new FXAA shader that we used for the DepthOfField shader. You can copy it and reduce it to a single pass that just performs a blit for now.

Attach our new effect as the only one to the camera. Once again, we assume that we're rendering in linear HDR space, so configure the project and camera accordingly. Also, because we perform our own anti-aliasing, make sure that MSAA is disabled.

The scene view camera uses the MSAA settings from the quality settings; it doesn't mimic the main camera in this case.

FXAA works by selectively reducing the contrast of the image, smoothing out visually obvious jaggies and isolated pixels. Contrast is determined by comparing the light intensity of pixels. The exact colors of pixels don't matter; it's their luminance that counts. Effectively, FXAA works on a grayscale image containing only the pixel brightness. This means that hard transitions between different colors won't be smoothed out much when their luminance is similar. Only visually obvious transitions are strongly affected.

Let's begin by checking out what this monochrome luminance image looks like. As the green color component contributes most to a pixel's luminance, a quick preview can be created by simply using that, discarding the red and blue color data.

This is a crude approximation of luminance. It's better to properly calculate luminance, for which we can use the LinearRgbToLuminance function from UnityCG.

FXAA expects luminance values to lie in the 0–1 range, but this isn't guaranteed when working with HDR colors. Typically, anti-aliasing is done after tonemapping and color grading, which should have gotten rid of most if not all HDR colors. But we don't use those effects in this tutorial, so use the clamped color to calculate luminance.

FXAA doesn't calculate luminance itself. That would be expensive, because each pixel requires multiple luminance samples. Instead, the luminance data has to be put in the alpha channel by an earlier pass. Alternatively, FXAA can use green as luminance instead, for example when the alpha channel cannot be used for some reason. Unity's post effect stack v2 supports both approaches when FXAA is used.

Let's support both options too, but because we're not using a post effect stack let's also support calculating luminance ourselves. Add an enumeration field to FXAAEffect to control this and set it to Calculate in the inspector.

When we have to calculate luminance ourselves, we'll do this with a separate pass, storing the original RGB plus luminance data in a temporary texture. The actual FXAA pass then uses that texture instead of the original source. Furthermore, the FXAA pass needs to know whether it should use the green or alpha channel for luminance. We'll indicate this via the LUMINANCE_GREEN shader keyword.

We can use our existing pass for the luminance pass. The only change is that luminance should be stored in the alpha channel, keeping the original RGB data. The new FXAA pass starts out as a simple blit pass, with a multi-compile option for LUMINANCE_GREEN.

To apply the FXAA effect, we have to sample luminance data. This is done by sampling the main texture and selecting either its green or alpha channel. We'll create some convenient functions for this, putting them all in a CGINCLUDE block at the top of the shader file.
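To make the luminance math concrete, here's a small Python sketch (an illustration on plain numbers, not shader code) that computes luminance from a linear RGB color using the Rec. 709 weights that UnityCG's LinearRgbToLuminance uses, clamping HDR input first as discussed above.

```python
def saturate(x):
    # Shader-style clamp to the 0-1 range.
    return min(max(x, 0.0), 1.0)

def linear_rgb_to_luminance(r, g, b):
    # Rec. 709 weights, matching UnityCG's LinearRgbToLuminance.
    # Note how dominant green is: this is why the green channel alone
    # works as a crude luminance preview.
    return 0.2126729 * r + 0.7151522 * g + 0.0721750 * b

def sample_luminance(r, g, b):
    # FXAA expects luminance in 0-1, so clamp HDR colors before weighing.
    return linear_rgb_to_luminance(saturate(r), saturate(g), saturate(b))
```

For example, an HDR red of (2, 0, 0) clamps to (1, 0, 0) and yields a luminance of roughly 0.21, while pure green alone already contributes about 0.72.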

Blending High-contrast Pixels

FXAA works by blending high-contrast pixels. This is not a straightforward blurring of the image. First, the local contrast has to be calculated. Second—if there is enough contrast—a blend factor has to be chosen based on the contrast. Third, the local contrast gradient has to be investigated to determine a blend direction. Finally, a blend is performed between the original pixel and one of its neighbors.
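As a rough structural sketch, the decision sequence can be written in Python for a single pixel's NESW neighborhood. This is a deliberately simplified stand-in, not the actual FXAA math; the real contrast test, blend factor, and direction logic are developed in the sections below.

```python
def apply_fxaa_outline(lum, middle, contrast_threshold=0.0312):
    # `lum` maps the compass directions "n", "e", "s", "w" to neighbor
    # luminance values; `middle` is the pixel's own luminance.
    n, e, s, w = lum["n"], lum["e"], lum["s"], lum["w"]

    # Step 1: local contrast of the cross plus middle.
    contrast = max(n, e, s, w, middle) - min(n, e, s, w, middle)
    if contrast < contrast_threshold:
        return middle, None  # not enough contrast, leave the pixel alone

    # Step 2: choose a blend factor (placeholder: the raw contrast).
    blend = contrast

    # Step 3: pick a blend direction (simplified: the NESW neighbor that
    # differs most from the middle).
    direction = max("nesw", key=lambda d: abs(lum[d] - middle))

    # Step 4: blend toward that neighbor.
    blended = middle + blend * (lum[direction] - middle)
    return blended, direction
```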

Determining Contrast With Adjacent Pixels

The local contrast is found by comparing the luminance of the current pixel with the luminance of its neighbors. To make it easy to sample the neighbors, add a SampleLuminance function variant that has offset parameters for the U and V coordinates, in texels. These should be scaled by the texel size and added to uv before sampling.

float SampleLuminance (float2 uv) { … }

float SampleLuminance (float2 uv, float uOffset, float vOffset) {
	uv += _MainTex_TexelSize * float2(uOffset, vOffset);
	return SampleLuminance(uv);
}

FXAA uses the direct horizontal and vertical neighbors—and the middle pixel itself—to determine the contrast. Because we'll use this luminance data multiple times, let's put it in a LuminanceData structure. We'll use compass directions to refer to the neighbor data, using north for positive V, east for positive U, south for negative V, and west for negative U. Sample these pixels and initialize the luminance data in a separate function, and invoke it in ApplyFXAA.

NESW cross plus middle pixel.

struct LuminanceData {
	float m, n, e, s, w;
};

LuminanceData SampleLuminanceNeighborhood (float2 uv) {
	LuminanceData l;
	l.m = SampleLuminance(uv);
	l.n = SampleLuminance(uv,  0,  1);
	l.e = SampleLuminance(uv,  1,  0);
	l.s = SampleLuminance(uv,  0, -1);
	l.w = SampleLuminance(uv, -1,  0);
	return l;
}

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
	return l.m;
}

Shouldn't north and south be swapped?
I'm using the OpenGL convention that UV coordinates go from left to right and bottom to top. The FXAA algorithm doesn't care about the relative direction though, it just has to be consistent.

The local contrast between these pixels is simply the difference between their highest and lowest luminance values. As luminance is defined in the 0–1 range, so is the contrast. We calculate the lowest, highest, and contrast values immediately after sampling the cross. Add them to the structure so we can access them later in ApplyFXAA. The contrast is most important, so let's see what that looks like.

struct LuminanceData {
	float m, n, e, s, w;
	float highest, lowest, contrast;
};

LuminanceData SampleLuminanceNeighborhood (float2 uv) {
	LuminanceData l;
	…
	l.highest = max(max(max(max(l.n, l.e), l.s), l.w), l.m);
	l.lowest = min(min(min(min(l.n, l.e), l.s), l.w), l.m);
	l.contrast = l.highest - l.lowest;
	return l;
}

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
	return l.contrast;
}

Local contrast.

The result is like a crude edge-detection filter. Because contrast doesn't care about direction, pixels on both sides of a contrast difference end up with the same value. So we get edges that are at least two pixels thick, formed by north–south or east–west pixel pairs.
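The cross contrast computation translates directly to plain arithmetic; here's a Python sketch (names mirror the shader's LuminanceData fields) that can be sanity-checked on a hand-made neighborhood.

```python
def cross_contrast(m, n, e, s, w):
    # Highest and lowest luminance in the NESW cross plus the middle pixel;
    # the local contrast is their difference, which stays in 0-1 just like
    # the luminance values themselves.
    highest = max(n, e, s, w, m)
    lowest = min(n, e, s, w, m)
    return highest - lowest, highest, lowest
```

For a pixel on a hard black-to-white edge, say m = 0 with n = 1, the contrast is 1, while in a flat region it is 0, which is why the result behaves like a crude edge detector.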

Skipping Low-contrast Pixels

When the local contrast is low, anti-aliasing won't make a visible difference, so we don't need to bother with those areas. Let's make this configurable via a contrast threshold slider. The original FXAA algorithm has this threshold as well, with the following code documentation:

// Trims the algorithm from processing darks.
//   0.0833 - upper limit (default, the start of visible unfiltered edges)
//   0.0625 - high quality (faster)
//   0.0312 - visible limit (slower)

Although the documentation mentions that it trims dark areas, it actually trims based on contrast—not luminance—so it applies regardless of whether a region is bright or dark. We'll use the same range as indicated by the documentation, but with the low threshold as default.

	[Range(0.0312f, 0.0833f)]
	public float contrastThreshold = 0.0312f;

	…

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		if (fxaaMaterial == null) {
			fxaaMaterial = new Material(fxaaShader);
			fxaaMaterial.hideFlags = HideFlags.HideAndDontSave;
		}
		fxaaMaterial.SetFloat("_ContrastThreshold", contrastThreshold);
		…
	}

Contrast threshold.

Inside the shader, simply return after sampling the neighborhood if the contrast is below the threshold. To make it visually obvious which pixels are skipped, I made them red.

float _ContrastThreshold;

…

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
	if (l.contrast < _ContrastThreshold) {
		return float4(1, 0, 0, 0);
	}
	return l.contrast;
}

Red pixels are skipped.

Besides an absolute contrast threshold, FXAA also has a relative threshold. Here is the code documentation for it:

// The minimum amount of local contrast required to apply algorithm.
//   0.333 - too little (faster)
//   0.250 - low quality
//   0.166 - default
//   0.125 - high quality
//   0.063 - overkill (slower)

This sounds like the threshold that we just introduced, but in this case it's relative to the maximum luminance of the neighborhood. The brighter the neighborhood, the higher the contrast must be to matter. We'll add a configuration slider for this relative threshold as well, using the indicated range, again with the lowest value as the default.

	[Range(0.063f, 0.333f)]
	public float relativeThreshold = 0.063f;

	…

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		…
		fxaaMaterial.SetFloat("_ContrastThreshold", contrastThreshold);
		fxaaMaterial.SetFloat("_RelativeThreshold", relativeThreshold);
		…
	}

Relative contrast threshold.

The threshold is relative because it's scaled by the neighborhood's highest luminance. Use it instead of the previous threshold to see the difference. This time, I've used green to indicate skipped pixels.

float _ContrastThreshold, _RelativeThreshold;

…

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
	if (l.contrast < _RelativeThreshold * l.highest) {
		return float4(0, 1, 0, 0);
	}
	return l.contrast;
}

Green pixels are skipped.

Overall, the contrast threshold most aggressively skips pixels, but the relative threshold can skip higher-contrast pixels in brighter regions. For example, in the below screenshot I've combined both colors, with both thresholds at maximum. Yellow indicates pixels that are skipped by both criteria. In this scene, only some white shadowed regions and the white spheres are affected solely by the relative threshold.

Both thresholds, at maximum.

To apply both thresholds, simply compare the contrast with the maximum of both. For clarity, put this comparison in a separate function. For now, if a pixel is skipped, simply make it black by returning zero.

bool ShouldSkipPixel (LuminanceData l) {
	float threshold =
		max(_ContrastThreshold, _RelativeThreshold * l.highest);
	return l.contrast < threshold;
}

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
//	if (l.contrast < _RelativeThreshold * l.highest) {
//		return float4(0, 1, 0, 0);
//	}
	if (ShouldSkipPixel(l)) {
		return 0;
	}
	return l.contrast;
}

Contrast, with skipped pixels at zero.
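The combined skip decision is easy to verify on plain numbers. Here's a minimal Python sketch using the default slider values from above:

```python
def should_skip_pixel(contrast, highest,
                      contrast_threshold=0.0312, relative_threshold=0.063):
    # Skip when the local contrast falls below the effective threshold,
    # which is the larger of the absolute threshold and the relative one;
    # the relative threshold scales with the neighborhood's brightest pixel.
    threshold = max(contrast_threshold, relative_threshold * highest)
    return contrast < threshold
```

A contrast of 0.05 is processed in a dark neighborhood (highest luminance 0.2, effective threshold 0.0312) but skipped in a bright one (highest luminance 1, where the relative threshold raises the bar to 0.063).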

Calculating Blend Factor

Now that we have the contrast values for the pixels that need anti-aliasing, we can move on to determining the blend factor. Create a separate function for this, with the luminance data as a parameter, and use that to determine the final result.

float DeterminePixelBlendFactor (LuminanceData l) {
	return 0;
}

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
	if (ShouldSkipPixel(l)) {
		return 0;
	}
	float pixelBlend = DeterminePixelBlendFactor(l);
	return pixelBlend;
}

How much we should blend depends on the contrast between the middle pixel and its entire neighborhood. Although we've used the NESW cross to determine the local contrast, this isn't a sufficient representation of the neighborhood. We need the four diagonal neighbors for that as well, so add them to the luminance data. We can sample them directly in SampleLuminanceNeighborhood along with the other neighbors, even though we might end up skipping the pixel. The shader compiler takes care of optimizing our code so the extra sampling only happens when needed.

Entire neighborhood.

struct LuminanceData {
	float m, n, e, s, w;
	float ne, nw, se, sw;
	float highest, lowest, contrast;
};

LuminanceData SampleLuminanceNeighborhood (float2 uv) {
	LuminanceData l;
	l.m = SampleLuminance(uv);
	l.n = SampleLuminance(uv,  0,  1);
	l.e = SampleLuminance(uv,  1,  0);
	l.s = SampleLuminance(uv,  0, -1);
	l.w = SampleLuminance(uv, -1,  0);
	l.ne = SampleLuminance(uv,  1,  1);
	l.nw = SampleLuminance(uv, -1,  1);
	l.se = SampleLuminance(uv,  1, -1);
	l.sw = SampleLuminance(uv, -1, -1);
	…
}

Now we can determine the average luminance of all adjacent neighbors. But because the diagonal neighbors are spatially further away from the middle, they should matter less. We factor this into our average by doubling the weights of the NESW neighbors, dividing the total by twelve instead of eight. The result is akin to a tent filter and acts as a low-pass filter.

Neighbor weights.

float DeterminePixelBlendFactor (LuminanceData l) {
	float filter = 2 * (l.n + l.e + l.s + l.w);
	filter += l.ne + l.nw + l.se + l.sw;
	filter *= 1.0 / 12;
	return filter;
}

Low-pass filter on high-contrast regions.

Next, find the contrast between the middle and this average, via their absolute difference. The result has now become a high-pass filter.

float DeterminePixelBlendFactor (LuminanceData l) {
	float filter = 2 * (l.n + l.e + l.s + l.w);
	filter += l.ne + l.nw + l.se + l.sw;
	filter *= 1.0 / 12;
	filter = abs(filter - l.m);
	return filter;
}

High-pass filter.

Next, the filter is normalized relative to the contrast of the NESW cross, via a division. Clamp the result to a maximum of 1, as we might end up with larger values thanks to the filter covering more pixels than the cross.

	filter = abs(filter - l.m);
	filter = saturate(filter / l.contrast);
	return filter;

Normalized filter.

The result is a rather harsh transition to use as a blend factor. Use the smoothstep function to smooth it out, then square the result to slow it down.

Linear vs. squared smoothstep.

	filter = saturate(filter / l.contrast);
	float blendFactor = smoothstep(0, 1, filter);
	return blendFactor * blendFactor;

Blend factor.
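The whole blend-factor pipeline, a tent-filtered average, a high-pass against the middle, normalization by the cross contrast, then a squared smoothstep, can be checked on plain numbers. A Python sketch of the same arithmetic (the cross contrast is passed in, and assumed nonzero, since skipped pixels never reach this code):

```python
def smoothstep01(x):
    # smoothstep(0, 1, x) for x already clamped to the 0-1 range.
    return x * x * (3.0 - 2.0 * x)

def determine_pixel_blend_factor(m, n, e, s, w, ne, nw, se, sw, contrast):
    # Tent filter: NESW neighbors weighted double, diagonals single,
    # so the total weight is twelve.
    filt = (2.0 * (n + e + s + w) + (ne + nw + se + sw)) / 12.0
    # High-pass: deviation of the middle from the neighborhood average.
    filt = abs(filt - m)
    # Normalize against the NESW cross contrast, clamped to at most 1.
    filt = min(filt / contrast, 1.0)
    # Smooth the transition, then square to slow it down.
    blend = smoothstep01(filt)
    return blend * blend
```

An isolated black pixel in a white neighborhood (m = 0, all neighbors 1, contrast 1) yields the maximum blend factor of 1, while milder deviations fall off smoothly toward 0.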

Blend Direction

Now that we have a blend factor, the next step is to decide which two pixels to blend. FXAA blends the middle pixel with one of its neighbors from the NESW cross. Which of those four pixels is selected depends on the direction of the contrast gradient. In the simplest case, the middle pixel touches either a horizontal or a vertical edge between two contrasting regions. In case of a horizontal edge, it should be either the north or the south neighbor, depending on whether the middle is below or above the edge. Otherwise, it should be either the east or the west neighbor, depending on whether the middle is on the left or right side of the edge.

Blend directions. Red represents a brightness difference, either darker or lighter.

Edges often aren't perfectly horizontal or vertical, but we'll pick the best approximation. To determine that, we compare the horizontal and vertical contrast in the neighborhood. When there is a horizontal edge, there is strong vertical contrast, either above or below the middle. We measure this by adding north and south, subtracting the middle twice, and taking the absolute of that, so |n + s - 2m|. The same logic applies to vertical edges, but with east and west instead.

This only gives us an indication of the vertical contrast inside the NESW cross. We can improve the quality of our edge orientation detection by including the diagonal neighbors as well. For the horizontal edge, we perform the same calculation for the three pixels one step to the east and the three pixels one step to the west, summing the results. Again, these additional values are further away from the middle, so we halve their relative importance. This leads to the final formula 2|n + s - 2m| + |ne + se - 2e| + |nw + sw - 2w| for the horizontal edge contrast, and similar for the vertical edge contrast. We don't need to normalize the results, because we only care about which one is larger and they both use the same scale.

If the horizontal edge contrast is greater than or equal to the vertical one, then we have a horizontal edge. Create a struct to hold this edge data and put the calculation for it in a separate function. Then have ApplyFXAA invoke it. This allows us to visualize the detected edge orientation, for example by making horizontal edges red.

struct EdgeData {
	bool isHorizontal;
};

EdgeData DetermineEdge (LuminanceData l) {
	EdgeData e;
	float horizontal =
		abs(l.n + l.s - 2 * l.m) * 2 +
		abs(l.ne + l.se - 2 * l.e) +
		abs(l.nw + l.sw - 2 * l.w);
	float vertical =
		abs(l.e + l.w - 2 * l.m) * 2 +
		abs(l.ne + l.nw - 2 * l.n) +
		abs(l.se + l.sw - 2 * l.s);
	e.isHorizontal = horizontal >= vertical;
	return e;
}

float4 ApplyFXAA (float2 uv) {
	LuminanceData l = SampleLuminanceNeighborhood(uv);
	if (ShouldSkipPixel(l)) {
		return 0;
	}
	float pixelBlend = DeterminePixelBlendFactor(l);
	EdgeData e = DetermineEdge(l);
	return e.isHorizontal ? float4(1, 0, 0, 0) : 1;
}

Red pixels are on horizontal edges.

Knowing the edge orientation tells us in which dimension we have to blend. If the edge is horizontal, then we'll blend vertically, across the edge. How far it is to the next pixel in UV space depends on the texel size, and that depends on the blend direction. So let's add this step size to the edge data as well.

struct EdgeData {
	bool isHorizontal;
	float pixelStep;
};

EdgeData DetermineEdge (LuminanceData l) {
	…
	e.isHorizontal = horizontal >= vertical;
	e.pixelStep =
		e.isHorizontal ? _MainTex_TexelSize.y : _MainTex_TexelSize.x;
	return e;
}

Next, we have to determine whether we should blend in the positive or negative direction. We do this by comparing the contrast—the luminance gradient—on either side of the middle in the appropriate dimension. If we have a horizontal edge, then north is the positive neighbor and south is the negative one. If we have a vertical edge instead, then east is the positive neighbor and west is the negative one.

	float pLuminance = e.isHorizontal ? l.n : l.e;
	float nLuminance = e.isHorizontal ? l.s : l.w;
	e.pixelStep =
		e.isHorizontal ? _MainTex_TexelSize.y : _MainTex_TexelSize.x;

Compare the gradients. If the positive side has the highest contrast, then we can use the appropriate texel size unchanged. Otherwise, we have to step in the opposite direction, so we negate it.

	float pLuminance = e.isHorizontal ? l.n : l.e;
	float nLuminance = e.isHorizontal ? l.s : l.w;
	float pGradient = abs(pLuminance - l.m);
	float nGradient = abs(nLuminance - l.m);
	e.pixelStep =
		e.isHorizontal ? _MainTex_TexelSize.y : _MainTex_TexelSize.x;
	if (pGradient < nGradient) {
		e.pixelStep = -e.pixelStep;
	}

To visualize this, I made all pixels with a negative step red. Because pixels should blend across the edge, this means that all pixels on the right or top side of edges become red.

float4 ApplyFXAA (float2 uv) {
	…
	return e.pixelStep < 0 ? float4(1, 0, 0, 0) : 1;
}

Red pixels blend in negative direction.
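The orientation and step-sign logic also translates directly to plain arithmetic. In this Python sketch of the DetermineEdge math, the default texel sizes are hypothetical placeholders for a 1920×1080 target; a pixel sitting just below a bright horizontal edge ends up with a horizontal classification and a positive (northward) step.

```python
def determine_edge(m, n, e, s, w, ne, nw, se, sw,
                   texel_x=1.0 / 1920, texel_y=1.0 / 1080):
    # Horizontal-edge contrast: vertical second difference of luminance,
    # with the middle column weighted double relative to the east and
    # west columns. Vertical-edge contrast mirrors it.
    horizontal = (abs(n + s - 2 * m) * 2
                  + abs(ne + se - 2 * e)
                  + abs(nw + sw - 2 * w))
    vertical = (abs(e + w - 2 * m) * 2
                + abs(ne + nw - 2 * n)
                + abs(se + sw - 2 * s))
    is_horizontal = horizontal >= vertical

    # Step across the edge: vertically for a horizontal edge, otherwise
    # horizontally.
    pixel_step = texel_y if is_horizontal else texel_x

    # Blend toward whichever side has the steeper luminance gradient;
    # negate the step when the negative side wins.
    p_lum = n if is_horizontal else e
    n_lum = s if is_horizontal else w
    if abs(p_lum - m) < abs(n_lum - m):
        pixel_step = -pixel_step
    return is_horizontal, pixel_step
```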