Chapter 5 in OpenGL 4.0 Shading Language Cookbook delves into the world of post-processing. For the uninitiated, post-processing in the game development world refers to processing that is done to a frame after it has been fully rendered. Post-processing enables you to add many impressive-looking effects to a rendering engine in a relatively resource-efficient way. I will say that I don’t think any of my examples here are beautiful examples of what post-processing can do, but they definitely get the point across. Over the course of this chapter I had to implement a robust system to handle post-processing in a modular manner without breaking the rest of the features of the engine. After that, it had me implement several post-processing shaders, including an edge detector, a Gaussian blur effect, bloom, and a couple others.

As always, the Derydoca Engine source code is available on GitHub. If you want to follow along, the commit hash for this post is a0b85c38d653360ac91147db3fc8b050d88e4c9c.

Edge Detection

The first effect described in the book is an edge detection filter. This shader is actually relatively straightforward. After the frame has been rendered, the shader visits each pixel and compares the eight pixels that surround it to determine a scalar value describing how much that pixel’s luminance varies from its neighbors. The pixel is then colored white if that value is greater than a certain threshold; otherwise it is colored black.
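A minimal fragment shader sketch of that idea is below. It uses a Sobel operator over the eight neighbors’ luminance values; the uniform names (`RenderTex`, `EdgeThreshold`) are illustrative, not necessarily the ones used in my engine.

```glsl
// Sketch: luminance-based edge detection over the rendered frame.
#version 400

uniform sampler2D RenderTex;   // the fully rendered frame
uniform float EdgeThreshold;   // squared-gradient cutoff
layout(location = 0) out vec4 FragColor;

float luminance(vec3 color)
{
    return dot(color, vec3(0.2126, 0.7152, 0.0722));
}

void main()
{
    ivec2 pix = ivec2(gl_FragCoord.xy);
    // Sample the eight pixels surrounding the current one.
    float s00 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2(-1,  1)).rgb);
    float s10 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2(-1,  0)).rgb);
    float s20 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2(-1, -1)).rgb);
    float s01 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 0,  1)).rgb);
    float s21 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 0, -1)).rgb);
    float s02 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 1,  1)).rgb);
    float s12 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 1,  0)).rgb);
    float s22 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 1, -1)).rgb);

    // Sobel operator: horizontal and vertical luminance gradients.
    float sx = s00 + 2.0 * s10 + s20 - (s02 + 2.0 * s12 + s22);
    float sy = s00 + 2.0 * s01 + s02 - (s20 + 2.0 * s21 + s22);
    float g = sx * sx + sy * sy;

    // White where the gradient exceeds the threshold, black elsewhere.
    FragColor = g > EdgeThreshold ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```

Using the squared gradient avoids a square root per pixel; the threshold just has to be squared to match.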

I toyed around with this shader so that all of the black pixels let the previous color buffer’s frame data through, creating a simple cel-shaded effect. You can also easily tweak the color of the lines and background, as well as the threshold, to reduce or increase the number of lines drawn.

Gaussian Blur

Image blurring is a common image editing technique, and the Gaussian blur is one of the most popular methods. The book implements a variant of the Gaussian blur where, instead of sampling every pixel within a radius of the current pixel, it blurs in the x direction and then in the y direction. You might wonder why we would do that. Put simply, a full two-dimensional kernel is computationally expensive: for a radius of r pixels, it samples on the order of r² texels per fragment. Because the Gaussian filter is separable, blurring along the x-axis and then along the y-axis produces the same result while sampling only on the order of r texels per fragment in each pass. This splits the blur across two passes, but it is still much faster to process than the traditional single-pass Gaussian blur.
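Here is a sketch of one of the two separable passes (the vertical one); the horizontal pass is identical with the offsets swapped. The `Weight` array holds precomputed Gaussian weights, and the names are illustrative rather than my engine’s exact uniforms.

```glsl
// Sketch: vertical half of a separable Gaussian blur.
// A second, near-identical pass blurs along the row.
#version 400

uniform sampler2D RenderTex;   // output of the previous pass
uniform float Weight[5];       // Gaussian weights for offsets 0..4
layout(location = 0) out vec4 FragColor;

void main()
{
    ivec2 pix = ivec2(gl_FragCoord.xy);
    // Center sample, then walk up and down the column.
    vec4 sum = texelFetch(RenderTex, pix, 0) * Weight[0];
    for (int i = 1; i < 5; i++) {
        sum += texelFetchOffset(RenderTex, pix, 0, ivec2(0,  i)) * Weight[i];
        sum += texelFetchOffset(RenderTex, pix, 0, ivec2(0, -i)) * Weight[i];
    }
    FragColor = sum;
}
```

With a kernel half-width of 4 this costs 9 + 9 = 18 samples per pixel over the two passes, versus 81 for the equivalent 9×9 two-dimensional kernel.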

Bloom

Bloom is an effect that can easily be overdone in games. Just look at games in the last console generation if you doubt me! However, when done well, this effect can bring the believability of your scene up quite a bit.

Bloom bleeds light from overexposed areas of a render into the surrounding areas. This particular implementation of bloom takes a total of three passes. The first pass renders the scene in black and white to use as a mask for the blurring stages: any part of the image with a luminance greater than the threshold is colored white (un-masked) and the rest of the image is black. From there, we blur in the x and y directions much like we did in the Gaussian blur shader, but this time only the pixels in the un-masked areas of the first pass contribute light. My screenshot shows off this feature a bit much, but it definitely communicates what the shader is doing.
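The first of those passes, the bright-pass that builds the mask, might look like this sketch. The `LumThresh` uniform is an assumed name for the luminance cutoff.

```glsl
// Sketch: bloom bright-pass producing the black-and-white mask.
#version 400

in vec2 TexCoord;
uniform sampler2D RenderTex;   // the fully rendered frame
uniform float LumThresh;       // luminance cutoff for "overexposed"
layout(location = 0) out vec4 FragColor;

float luminance(vec3 color)
{
    return dot(color, vec3(0.2126, 0.7152, 0.0722));
}

void main()
{
    vec3 color = texture(RenderTex, TexCoord).rgb;
    // White (un-masked) above the threshold, black everywhere else.
    FragColor = luminance(color) > LumThresh ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```

The mask then feeds the two blur passes, and the blurred result is added back on top of the original frame.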

Gamma Correction

If I had to rate all of these shaders, this one would earn the “least sexy” award. Not because it doesn’t look good, but because it is often overlooked in the game development world.

Gamma correction is a technique used to adjust an image to account for the color distortion introduced when it is displayed on your monitor. The book goes into good detail on color reproduction, so I will not cover it here. However, I included a screenshot below that illustrates the difference in visual quality.

On the left is the squirrel in the scene, and the right area contains a render texture with the gamma correction shader applied. As you can tell, the pure-white backdrop is the same shade of white in both, but the gold color of the tail on the left becomes a more matte brown on the right. Gamma correction changes the darkest and brightest values the least; it does most of its work on the midtones in between.
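The pass itself is tiny, which is part of why it is so easy to overlook. A sketch, assuming a `Gamma` uniform (typically 2.2):

```glsl
// Sketch: gamma correction as a final full-screen pass.
#version 400

in vec2 TexCoord;
uniform sampler2D RenderTex;   // the linear-space rendered frame
uniform float Gamma;           // typically 2.2
layout(location = 0) out vec4 FragColor;

void main()
{
    vec3 color = texture(RenderTex, TexCoord).rgb;
    // Raise to 1/gamma to counteract the monitor's response curve.
    // pow(0.0) and pow(1.0) are unchanged, which is why pure black
    // and pure white look identical with and without the pass.
    FragColor = vec4(pow(color, vec3(1.0 / Gamma)), 1.0);
}
```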

Deferred Rendering

The book saved the most complicated “effect” for last. Up until now, we have been doing all of our lighting calculations while rendering each object. That is the most straightforward way to draw a lit object to the screen. Deferred rendering flips this around a bit: when drawing your objects, you instead write to what is called a geometry buffer (or g-buffer for short). This buffer contains a set of textures holding all the data necessary to light your scene. This example only renders simple diffuse shading, so we need the unlit color of each pixel, the normal at that point, and the position at that point. The normal and position channels store their data in camera space instead of world space.
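The geometry pass writes those three channels at once using multiple render targets, roughly like this sketch (the `in` variables are assumed to arrive in camera space from the vertex shader):

```glsl
// Sketch: geometry pass writing into the g-buffer.
// Each output lands in its own texture attachment.
#version 400

in vec3 Position;   // camera-space position from the vertex shader
in vec3 Normal;     // camera-space normal from the vertex shader
in vec2 TexCoord;
uniform sampler2D DiffuseTex;

layout(location = 0) out vec3 PositionData;
layout(location = 1) out vec3 NormalData;
layout(location = 2) out vec3 ColorData;

void main()
{
    PositionData = Position;                           // position channel
    NormalData   = normalize(Normal);                  // normal channel
    ColorData    = texture(DiffuseTex, TexCoord).rgb;  // unlit color channel
}
```

Note that no lighting math happens here at all; the pass only records what each pixel would need in order to be lit later.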

You can take a look at the color, normal, and position channel buffers in the three thumbnails below. I was able to pull these images out of the g-buffer using a debug tool called RenderDoc, which allows you to dig into the internals of the graphics card to debug any issues you have with your shaders. As a side note, the backgrounds are incorrectly colored magenta here. This is because I was lazy and didn’t update the buffer clearing logic to account for it.

After all of these channels have been written, the final pass iterates over each pixel and uses the data contained in them to calculate the diffuse lighting in the scene. Light information, of course, is not stored in the g-buffer. It is still supplied to the shader as before, because it would not make sense to store that data in a texture.
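That final pass might look like the following sketch, with the light supplied through uniforms (the names here are illustrative) and everything evaluated in camera space to match the stored channels:

```glsl
// Sketch: deferred shading pass reading the g-buffer and
// computing simple Lambertian diffuse lighting.
#version 400

in vec2 TexCoord;
uniform sampler2D PositionTex;   // camera-space positions
uniform sampler2D NormalTex;     // camera-space normals
uniform sampler2D ColorTex;      // unlit diffuse colors
uniform vec3 LightPosition;      // camera-space light position
uniform vec3 LightIntensity;
layout(location = 0) out vec4 FragColor;

void main()
{
    vec3 pos    = texture(PositionTex, TexCoord).rgb;
    vec3 normal = normalize(texture(NormalTex, TexCoord).rgb);
    vec3 albedo = texture(ColorTex, TexCoord).rgb;

    // Lambertian diffuse term from the stored per-pixel data.
    vec3 toLight = normalize(LightPosition - pos);
    float diff = max(dot(normal, toLight), 0.0);
    FragColor = vec4(LightIntensity * albedo * diff, 1.0);
}
```

The payoff is that this lighting math runs once per screen pixel rather than once per rasterized fragment of every object, which is what makes deferred rendering attractive for scenes with many lights.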