iOS devices don’t have any kind of 3D option built in, which I’m glad of because it is a pointless gimmick that gives me headaches. Having said that, I quite like the ‘retro’ stereoscopic 3D that could be achieved with red-cyan glasses.

The premise of 3D images is really simple: you have two different images taken at two different positions that are roughly eye distance apart, and you then have to find a way of making sure each image only gets into one eye.

About three months ago I was learning the basics of OpenGL ES and I created a dumb Minecraft-style world:

I actually got it pretty well optimised, and it runs happily at about 60fps. Looking at the code again this morning, I figured out that stereoscopic rendering wouldn't be that hard to add.

The method is actually really simple:

Create two offscreen textures and associated frame buffers* that are the same dimensions as your view.

On each frame, create two view matrices from your original view matrix, each one shifted sideways a little (I went for about 5mm on an iPad screen).

Render the scene twice, once with each view matrix, into the associated offscreen textures.

Present both textures blended together with red and blue

The end result (this is the same view as above) looks a bit like this:

There are a number of disadvantages to this technique. The first is that you can’t go to really high resolutions. I got this running at 60fps on a retina iPad, but I had to render at 1024 * 768 (rather than the native 2048 * 1536) without anti-aliasing. The second is that you lose a lot of color information; I had to grayscale my images, and even then they appeared quite dull compared to the original. The third is that this doesn’t scale well across devices, because the right separation between the left/red camera and the right/blue camera depends on the physical screen size. I added a pinch-to-change-distance feature so that I could compare the iOS simulator with real devices.

Despite this, it is quite a cool technique, although I don’t think it will become particularly mainstream any time soon.

Update: After some discussion on Reddit I’ve updated the source so that the blending works differently: the final pixel is made up of the red component of the left pixel and the green and blue components of the right pixel. This maintains color and produces brighter images:

As per request, I’ve also stuck the code on GitHub. It is a little verbose at the moment, but it’s still readable.

*Oh yeah, no GLKit here!