In my previous post, I briefly explained how to use Physically Based Rendering in iOS 10 using SceneKit. As you probably noticed, the results are quite amazing. In this post, we are going to make them even better with some magic called Post-Processing Effects.

In this short article, I will try to briefly explain what post-processing effects are, how they work, and how you can use them in iOS 10 to get great-looking results without the complexity of an advanced graphics engine.

I will go ahead and give an honest disclaimer: this article oversimplifies some concepts to give you an intuition about the key elements of post-processing. I really recommend reading more about the topic if you find it interesting.

Post-Processing 101

Post-processing effects are a series of image filters that are applied to your rendering result. Sounds complicated? It’s really not. The diagram below illustrates how it works in a very simplified manner.

First, your scene is rendered. Then, post-processing effects are applied one after the other. Each step, also called a Render Pass, takes an image as input and produces an image as output (hence the notion of filters). The last post-processing step is somewhat special, as it draws directly to your screen (which, if you think about it, is just like outputting an image).

Post-Processing Effects flow. Each step takes an image as input and produces an image as output | The last effect draws directly to your screen
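In code, this flow is nothing more than function composition. Here is a minimal Swift sketch of the idea; `postProcess` and the generic `Image` placeholder are illustrative names, not a real SceneKit API:

    // Each render pass maps an image to an image; running the chain just
    // feeds every pass the output of the previous one.
    func postProcess<Image>(_ rendered: Image,
                            passes: [(Image) -> Image]) -> Image {
        return passes.reduce(rendered) { image, pass in pass(image) }
    }

    // Hypothetical usage; the last pass's output is what reaches the screen:
    // let final = postProcess(renderedScene, passes: [bloom, vignette, grayscale])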

As the effects are image filters, they are applied to pixels, not to your 3D scene. It is great if you associate this with Instagram-like applications. Why? Because that is exactly what they do! Instagram-like applications simply apply image filters to a single image to produce the effects we all love to share.

If you look carefully at the diagram, you will notice that order is important. Luckily for you, SceneKit already maintains the optimal order to keep your results nice and crisp. Some graphics engines, such as three.js, allow you to customize your post-processing order; however, to achieve great results you will need to dig a bit deeper into what each effect does.

Let’s take a very basic example: a Black-and-White effect.

Our app first renders the scene, which produces a colorful image. Then we iterate over every pixel and convert it to its grayscale equivalent. Finally, we draw the result on screen.

Iterating over each pixel to convert an entire image to grayscale | Note that without any special optimizations this code will not execute in parallel even on multi-core CPUs
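To make that loop concrete, here is a minimal Swift sketch. The interleaved 8-bit RGBA pixel layout and the function name are assumptions for illustration:

    // Convert an RGBA byte buffer to grayscale in place, one pixel at a time.
    func convertToGrayscale(_ rgba: inout [UInt8]) {
        for i in stride(from: 0, to: rgba.count, by: 4) {
            // Rec. 601 luma: a weighted average of the three color channels
            let gray = UInt8(0.299 * Double(rgba[i])
                           + 0.587 * Double(rgba[i + 1])
                           + 0.114 * Double(rgba[i + 2]))
            rgba[i] = gray        // R
            rgba[i + 1] = gray    // G
            rgba[i + 2] = gray    // B
            // rgba[i + 3] (alpha) stays as-is
        }
    }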

If you want to learn the math behind these color conversions, I strongly suggest learning about the different color spaces such as RGB, HSV, and LAB. Their Wikipedia pages are quite good :)

It is important to remember that these filters are applied to pixels because:

(1) Compute time grows with your screen resolution

(2) Artifacts, such as aliasing, can and will appear.

I will not go into detail regarding aliasing, as the topic deserves an article of its own. However, a cool thing to remember is that for most real-time applications, artifacts created by post-processing effects are usually solved by applying yet another post-processing effect. If you would like to dive deeper into solutions to this problem, start by reading about Fast Approximate Anti-Aliasing (FXAA).

Improving Compute Time Using Your GPU

When we work with images, we should be very conscious of how we process them. The 12.9" iPad Pro’s native resolution is 2732 x 2048 = 5,595,136 pixels (!). If we execute these operations on the CPU at a frame rate of 60 Frames Per Second (FPS), that is over 335 million pixels to touch every second; with even a handful of operations per pixel, we need to make more than 3 billion calculations per second, and will probably hit performance issues that cause many of our users to throw away our application.

Let’s take another look at the pseudo-code snippet that turned our colorful image to grayscale (the Black-and-White effect):

for pixel in image {
    pixel = pixel.convertToGrayscale()
}

Because each pixel is independent of the rest of the image, the entire for loop can be parallelized (!). For that task, we turn to our good friend, the GPU, using Shaders.
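To see why that independence matters, here is a Swift sketch that exploits it on the CPU first, splitting the loop across cores with Grand Central Dispatch. The buffer layout and names are the same illustrative assumptions as before; the GPU pushes the same idea to thousands of parallel threads:

    import Foundation

    // Grayscale again, but with one GCD iteration per row so that all CPU
    // cores can work on different parts of the image at the same time.
    func convertToGrayscaleInParallel(_ rgba: inout [UInt8],
                                      width: Int, height: Int) {
        rgba.withUnsafeMutableBufferPointer { pixels in
            DispatchQueue.concurrentPerform(iterations: height) { row in
                for x in 0..<width {
                    let i = (row * width + x) * 4
                    let gray = UInt8(0.299 * Double(pixels[i])
                                   + 0.587 * Double(pixels[i + 1])
                                   + 0.114 * Double(pixels[i + 2]))
                    pixels[i] = gray
                    pixels[i + 1] = gray
                    pixels[i + 2] = gray
                }
            }
        }
    }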

Shaders are code blocks that can be executed efficiently in parallel on our GPU. There are many awesome things you can do with shaders that I will not detail in this article; however, I will try to give you some intuition about how they work for image processing.

There are two main types of Shaders: Vertex shaders and Fragment shaders. A Vertex shader takes a single vertex (that is, a vector) as input and processes it. A Fragment shader takes a single pixel as input and processes it. To perform the Black-and-White effect, we will write our convertToGrayscale function as a Fragment shader (color pixel -> grayscale pixel) and ask it to draw on our screen. Depending on the specification of our GPU, we will be able to apply the effect with much greater performance, up to real-time performance.

Processing pixels on the GPU | Each pixel is processed in parallel on the GPU | With modern GPUs, such as the one in the A9X chip, we can reach great results
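Real fragment shaders on iOS are written in Metal’s C++-based shading language, but the mental model fits in a few lines of Swift. A sketch, with `Pixel` as a hypothetical stand-in for a shader’s input and output:

    // A Fragment shader is essentially a pure function from one input pixel
    // to one output pixel. The GPU invokes it once per pixel, with thousands
    // of invocations running at the same time.
    struct Pixel {
        var r, g, b, a: Float
    }

    func grayscaleFragment(_ input: Pixel) -> Pixel {
        let gray = 0.299 * input.r + 0.587 * input.g + 0.114 * input.b
        return Pixel(r: gray, g: gray, b: gray, a: input.a)
    }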

The great thing about using SceneKit is that it does all this hard work for you efficiently using Metal shaders (!), so if you are only interested in its off-the-shelf effects, you won’t need to write any shader code.
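Most of those off-the-shelf effects live as properties on SCNCamera starting in iOS 10. A minimal sketch of turning a few of them on; the specific values are illustrative, not recommendations:

    import SceneKit

    // Enable some of SceneKit's built-in post-processing on the camera.
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()

    if let camera = cameraNode.camera {
        camera.wantsHDR = true           // render into an HDR buffer first
        camera.bloomThreshold = 0.8      // how bright a pixel must be to glow
        camera.bloomIntensity = 1.0      // strength of the bloom glow
        camera.vignettingPower = 0.4     // darken the corners of the frame
        camera.vignettingIntensity = 0.6
        camera.saturation = 0.0          // 0 reproduces our Black-and-White effect
    }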