Edit: this page was featured on Engadget! Thanks for the love!

(THIS PAGE WON’T LOOK GOOD ON MOBILE, make sure Vimeo links are playing in 720p)

The way a computer monitor displays colours is to mix various intensities of red, green and blue light. A typical monitor can display 256 levels of intensity for each of the three colours, giving 256×256×256 ≈ 16.7 million different colours, which can be expressed in 24 bits of information. The following looping animation contains every 24-bit colour exactly once in one of its pixels in one of its frames (at least the uncompressed mp4 I uploaded to Vimeo did). It has 256 frames, 256×256 pixels apiece.
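The counting works out as a one-liner:

```python
# three channels, 256 intensity levels each: exactly the 24-bit colour space
assert 256 ** 3 == 2 ** 24 == 16_777_216
```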

It’s a bit small, so most of the animations that follow are actually 256 frames of 512×512 (so every colour appears four times). The following uses the same algorithm as the first one, just bigger.

Some of the animations further down look quite different from this one.

Code that generated these is here.

Background

About 18 months ago, there was a challenge on Code Golf to produce images using every RGB colour for exactly one pixel. I wrote a Scala implementation of the winning post, ‘rainbow smoke’, due to fejesjoco. It produces images where pixels’ colours are generally very similar to their neighbours’, giving a nice painting-like effect, dubbed ‘rainbow smoke’ by its creator.

Recently I found my Scala code on GitHub, and a thought occurred – instead of colouring a 2d array of pixels, why not try colouring a 3d array? A video is essentially just a 3d array of pixels (or voxels), with one of the dimensions being time, so this would produce a video in which every colour appears exactly once – and hopefully the video would change smoothly over time, in the same way that the 2d version changes smoothly in space. I tried to colour a 256x256x256 cube, which produces a video 256 frames long, each frame 256×256 pixels. Here was my very first attempt:

Pretty fuzzy in retrospect, and I quickly decided that 256×256 was a bit small. I rewrote my code in Julia so that it could handle 512x512x256 images, and started playing around with a bunch of parameters.

I found that improving the way the algorithm searches for available neighbours (the ‘rgb-cubes’ detailed in the code section) produced much more structured results.

In my many trials and errors, I sometimes stumbled upon pretty weird ones.

My favourite kind to date, and the one that most reminds me of the original ‘rainbow smoke’ oil-painting effect, came from one particular choice of settings with just a single seed point. Here is the last animation in this post.

Algorithm

First we explain the algorithm in vague terms. As mentioned before, this is not original, but due to fejesjoco. We form the list of all RGB colours (possibly with duplicates). Next we reorder them in a manner of our choosing. We create an empty cube of voxels, pick n voxels at random and mark them as ‘available’. Then we iterate through the colour list: for each colour, we find the best voxel among the availables and colour it with that colour. Finally we delete that voxel from the available set and add all of its uncoloured neighbours to it. To create a loop, we wrap the two ends of the time dimension, so that the first and last frames are ‘neighbours’.
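The loop above can be sketched on a tiny cube. This is a Python sketch of the idea, not my actual Julia code; the function names, the 8×8×8 size, and the choice of a shuffled colour ordering are all my own for illustration:

```python
import random
from itertools import product

def fill_cube(n=8, seeds=1, seed=0):
    rng = random.Random(seed)
    # every colour with n intensity levels per channel, in a shuffled order
    # (the ordering of the colour list is one of the free parameters)
    colours = list(product(range(n), repeat=3))
    rng.shuffle(colours)

    cube = {}          # (x, y, t) -> (r, g, b), coloured voxels only
    available = set()  # frontier: uncoloured voxels next to coloured ones
    for _ in range(seeds):
        available.add((rng.randrange(n), rng.randrange(n), rng.randrange(n)))

    def neighbours(v):
        x, y, t = v
        for dx, dy, dt in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:
                yield (nx, ny, (t + dt) % n)  # time wraps, so the animation loops

    def fitness(v, c):
        # max per-channel difference between c and the mean of the coloured neighbours
        ns = [cube[w] for w in neighbours(v) if w in cube]
        if not ns:
            return 0
        avg = [sum(ch) / len(ns) for ch in zip(*ns)]
        return max(abs(a - b) for a, b in zip(avg, c))

    for c in colours:
        best = min(available, key=lambda v: fitness(v, c))
        cube[best] = c
        available.discard(best)
        available.update(w for w in neighbours(best) if w not in cube)
    return cube
```

At this toy scale the brute-force `min` over all available voxels is fine; scaling it up to 512x512x256 is exactly what the Code section below is about.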

I have been intentionally vague about a lot of things here. In what order do we take the colours? What constitutes a neighbouring voxel? What does it mean to find the best available voxel for a colour? These are all changeable, and the animations in these posts were essentially found by mixing and matching different choices for each.

I won’t go through everything, but a few observations: it is better to take ‘neighbouring’ to mean the six directly adjacent voxels, rather than the full set of 26 voxels around one. The way the ‘best’ empty voxel for a colour is defined has the most profound effect on the way things look. My favourite animations come from taking the mean of the r, g and b values of the voxel’s coloured neighbours, and then looking at the max of the differences between each of these and the r, g, b of the candidate colour (kind of like an L∞ norm).
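As a concrete illustration, that max-of-differences score could look like this (a sketch; the function name is mine, and lower scores mean a better match):

```python
def linf_fitness(neighbour_colours, candidate):
    """Score a candidate colour against a voxel's coloured neighbours.

    Take the per-channel mean of the neighbours, then return the largest
    per-channel difference from the candidate (an L-infinity style distance).
    """
    avg = [sum(ch) / len(neighbour_colours) for ch in zip(*neighbour_colours)]
    return max(abs(a - c) for a, c in zip(avg, candidate))
```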

Code

This was a fun exercise, somewhat more challenging than the RGB images, since you need to build up a rather large cube to have anything worth looking at. While I have made my code available, I would encourage the reader to come up with their own solution, or at the very least their own code (a lot of the solutions on the code golf thread look like they have easy 3d analogues). Mine can build a 512 x 512 x 256 ‘cube’ in a couple of hours, depending on the parameters passed to it; in theory it should be doable in minutes given a better lookup strategy.

As for my code, the reason it is so monolithic (and pretty messy) is mainly optimisation. We cannot realistically check every available voxel in the cube, since this set grows extremely large (there can be millions of available spaces). Ideally one would put these into some sort of space-partitioning structure such as a k-d tree; however, I didn’t fancy coding one that would support quick deletions/updates in Julia (the current solution seems to have memory problems, which I can only assume would be exacerbated by a more complicated structure). First I tried just randomly sampling the available voxels (the first few animations in the background section do this), but then I tried dividing RGB-space (which is 256x256x256) into smaller RGB-cubes (16 x 16 x 16 by default), and at all times making sure each available voxel is mapped to the RGB-cube most similar to the ‘average’ of the coloured voxels around it.

Given a new colour, we find which RGB-cube it lies in, and look only at the voxels mapped to that cube to find the best one. We subsample from these voxels to help with the runtime (a little displeasing, but it seems to work fine). Once we have coloured a voxel, we then need to update all of its uncoloured neighbours, in theory unmapping each from one RGB-cube and mapping it to a different one.
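The bucketing scheme might look something like this (a Python sketch of the idea only; the class and method names are mine, and the real code keeps more state than this):

```python
import random
from collections import defaultdict

CELL = 16  # side of each RGB-cube: 256 / 16 = 16 cubes per axis, the default above

def bucket(colour):
    """Index of the RGB-cube a colour falls in."""
    return tuple(ch // CELL for ch in colour)

class BucketedAvailables:
    """Available voxels, grouped by the RGB-cube of their neighbour average."""
    def __init__(self):
        self.buckets = defaultdict(set)
        self.where = {}  # voxel -> the bucket it currently sits in

    def place(self, voxel, avg_colour):
        # (re-)map a voxel after the average of its coloured neighbours changes
        old = self.where.get(voxel)
        if old is not None:
            self.buckets[old].discard(voxel)
        b = bucket(avg_colour)
        self.buckets[b].add(voxel)
        self.where[voxel] = b

    def remove(self, voxel):
        b = self.where.pop(voxel, None)
        if b is not None:
            self.buckets[b].discard(voxel)

    def candidates(self, colour, k=32):
        # only consider voxels mapped to the colour's own RGB-cube,
        # subsampled to at most k (the runtime shortcut described above)
        pool = list(self.buckets.get(bucket(colour), ()))
        return pool if len(pool) <= k else random.sample(pool, k)
```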

There seem to be memory problems with constantly inserting into and deleting from large Julia sets, so instead I defined my own structure, lamely called a ‘delete array’. This stores an array of available voxels together with a set of voxels marked for deletion, and only performs the deletions once the marked set is large. If you run the code on a large cube, you will see that it starts very fast, then every so often gets snagged for a long time – I believe this is due to garbage collection, but I have found it difficult to pin down.
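The deferred-deletion idea is simple enough to sketch (again in Python rather than Julia, with my own names and a made-up threshold):

```python
class DeleteArray:
    """An array with deferred deletion: removals are only marked, and the
    actual compaction happens in batches once enough items are marked."""
    def __init__(self, threshold=1024):
        self.items = []
        self.dead = set()
        self.threshold = threshold

    def add(self, x):
        self.items.append(x)

    def delete(self, x):
        self.dead.add(x)
        if len(self.dead) >= self.threshold:
            self.compact()

    def compact(self):
        # one pass over the array replaces many individual deletions
        self.items = [x for x in self.items if x not in self.dead]
        self.dead.clear()

    def live(self):
        return [x for x in self.items if x not in self.dead]
```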