Haskell doesn’t sacrifice speed for power, or abstraction for control

Haskell is an amazing language for this type of program, for a couple of reasons. Its static typing eliminates an entire class of errors before your program will even compile. It is garbage collected, so tricky memory-management code is non-existent. It has a fantastic foreign function interface that lets it wrap and call any C code ever written (and vice versa). And it compiles to native libraries and executables, making it not only fast but also a legitimate candidate for platforms that don’t yet have, or won’t allow, virtual machine code interpreters (iPhones, Pres and their ilk).
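As a quick illustration of that foreign function interface, here is a minimal sketch of my own (the `c_sin` name is an invention for this example) that binds the C math library’s `sin` with no wrapper code at all:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- Bind C's sin() from the standard math library directly.
foreign import ccall unsafe "math.h sin"
  c_sin :: CDouble -> CDouble

main :: IO ()
main = print (c_sin 0)
```

Once imported, calling c_sin is indistinguishable from calling an ordinary Haskell function.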

So where do we start learning how to write OpenGL in Haskell?

The first thing to realize is that there are a million OpenGL tutorials in C and, in comparison, far fewer in Haskell. That’s OK, because Haskell’s bindings to OpenGL are low-level enough that you can actually use C examples to guide your Haskell code. For example, I found this OpenGL example demonstrating how to use GLUT to make drawing a cube super-easy. To build the executable, I had to download a ton of libraries, but they were all available via apt in Ubuntu, and I could finally write this Makefile to make compiling and linking a one-command affair:

```make
cube: cube.o
	gcc -o cube cube.o -lglut
```

Running the resultant cube executable produces a pretty picture:

The rest is getting familiar with the Haskell OpenGL and GLUT libraries. The functions reside in the Graphics.Rendering.OpenGL and Graphics.UI.GLUT modules. You can find their documentation here and here.
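Before diving into the cube, it may help to see the smallest possible GLUT program in Haskell — a sketch of my own, not taken from the tutorial — just to show how the module fits together:

```haskell
import Graphics.UI.GLUT

main :: IO ()
main = do
  -- GLUT consumes its own command-line flags and hands back the rest.
  (_progName, _args) <- getArgsAndInitialize
  _window <- createWindow "Hello, GLUT"
  -- ($=) assigns to a mutable state variable, mirroring
  -- OpenGL's state-machine style.
  displayCallback $= display
  mainLoop

display :: IO ()
display = do
  clear [ColorBuffer]
  flush
```

Graphics.UI.GLUT re-exports the OpenGL module, so functions like clear and the ($=) operator are available from the single import.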

I wrote my first version in a manner that I thought most closely resembled the C syntax. The n, faces and v functions in the Haskell version are stand-ins for the arrays the C version uses.

```haskell
n :: [Normal3 GLfloat]
n = [ (Normal3 (-1.0) 0.0 0.0), (Normal3 0.0 1.0 0.0)
    , (Normal3 1.0 0.0 0.0), (Normal3 0.0 (-1.0) 0.0)
    , (Normal3 0.0 0.0 1.0), (Normal3 0.0 0.0 (-1.0)) ]

faces :: [[Vertex3 GLfloat]]
faces = [ [(v 0), (v 1), (v 2), (v 3)]
        , [(v 3), (v 2), (v 6), (v 7)]
        , [(v 7), (v 6), (v 5), (v 4)]
        , [(v 4), (v 5), (v 1), (v 0)]
        , [(v 5), (v 6), (v 2), (v 1)]
        , [(v 7), (v 4), (v 0), (v 3)] ]

v :: Int -> Vertex3 GLfloat
v x = Vertex3 v0 v1 v2
  where
    v0 | x == 0 || x == 1 || x == 2 || x == 3 = -1
       | x == 4 || x == 5 || x == 6 || x == 7 = 1
    v1 | x == 0 || x == 1 || x == 4 || x == 5 = -1
       | x == 2 || x == 3 || x == 6 || x == 7 = 1
    v2 | x == 0 || x == 3 || x == 4 || x == 7 = 1
       | x == 1 || x == 2 || x == 5 || x == 6 = -1
```

And here’s the C code:

```c
GLfloat light_diffuse[] = {1.0, 0.0, 0.0, 1.0};   /* Red diffuse light. */
GLfloat light_position[] = {1.0, 1.0, 1.0, 0.0};  /* Infinite light location. */
GLfloat n[6][3] = {  /* Normals for the 6 faces of a cube. */
  {-1.0, 0.0, 0.0}, {0.0, 1.0, 0.0}, {1.0, 0.0, 0.0},
  {0.0, -1.0, 0.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, -1.0} };
GLint faces[6][4] = {  /* Vertex indices for the 6 faces of a cube. */
  {0, 1, 2, 3}, {3, 2, 6, 7}, {7, 6, 5, 4},
  {4, 5, 1, 0}, {5, 6, 2, 1}, {7, 4, 0, 3} };
GLfloat v[8][3];  /* Will be filled in with X,Y,Z vertexes. */

/* Setup cube vertex data. */
v[0][0] = v[1][0] = v[2][0] = v[3][0] = -1;
v[4][0] = v[5][0] = v[6][0] = v[7][0] = 1;
v[0][1] = v[1][1] = v[4][1] = v[5][1] = -1;
v[2][1] = v[3][1] = v[6][1] = v[7][1] = 1;
v[0][2] = v[3][2] = v[4][2] = v[7][2] = 1;
v[1][2] = v[2][2] = v[5][2] = v[6][2] = -1;
```

Also, the drawBox function was interesting to write, because Haskell has no for loop, so I had to rethink what was going on and translate the idea of executing a block of code over a list of data into its functional equivalent.

```haskell
drawBox :: IO ()
drawBox =
  let nfaces = zip n faces
  in do
    mapM (\(n, [v0, v1, v2, v3]) -> do
            renderPrimitive Quads $ do
              normal n
              vertex v0
              vertex v1
              vertex v2
              vertex v3)
         nfaces
    return ()
```
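The pattern at work here — run an IO action once for each element of a list — is what mapM captures in general (forM_ is its argument-flipped, result-discarding cousin). A tiny standalone sketch of my own shows the loop translation in isolation:

```haskell
import Control.Monad (forM_)

main :: IO ()
main =
  -- The functional analogue of: for (i = 0; i < 4; i++) printf("%d\n", i);
  forM_ [0 .. 3 :: Int] print
```

The "loop body" is just a function applied to each element, so there is no index counter to initialize, test, or increment.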

The Haskell OpenGL bindings have no glBegin / glEnd functions but rather renderPrimitive, which takes a PrimitiveMode and a block of vertex-related actions.

Beyond that, the only other thing that tripped me up was that I couldn’t figure out how to enable depth testing. I passed the option to the display mode to use a depth buffer:

```haskell
initialDisplayMode $= [DoubleBuffered, RGBMode, WithDepthBuffer]
```

and in C, there’s a single call to enable it:

```c
glEnable(GL_DEPTH_TEST);
```

which I couldn’t find anywhere in the Haskell API. I figured it would look like the call to enable lighting:

```haskell
lighting $= Enabled
```

so I ended up choosing depthMask $= Enabled, but my first clue that things weren’t working was what my program displayed:

More of a box than a cube, really.

The key, I found, was that the C version makes one call to enable depth buffering but omits the second call that sets the actual comparison the depth test uses, relying on default behavior. The Haskell API only provides that second, depth-test-setting operation:

```haskell
depthFunc $= Just Lequal
```

In place of my misguided depthMask, that code did the trick.

You can find my finished Haskell source in this gist, right above the original C program I translated. If you’re interested in running it, runhaskell Cube.hs is all you need.

This is by no means the best-looking Haskell code. It could be more idiomatic, but I wrote it this way to show how closely it mirrors the C code it came from. It still ended up 20 lines shorter, and it lets you focus on more important ideas than correctly updating index counters or setting integer bit-flags.
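To give a flavor of what “more idiomatic” might look like — this is a sketch of my own, not the version in the gist — the guard-heavy v collapses into a plain list of the eight corners, and drawBox collapses into forM_ and mapM_:

```haskell
import Control.Monad (forM_)
import Graphics.Rendering.OpenGL

-- The same six face normals as before.
n :: [Normal3 GLfloat]
n = [ Normal3 (-1) 0 0, Normal3 0 1 0, Normal3 1 0 0
    , Normal3 0 (-1) 0, Normal3 0 0 1, Normal3 0 0 (-1) ]

-- The eight cube corners listed directly, in the same order
-- the guards in v produced them.
vertices :: [Vertex3 GLfloat]
vertices =
  [ Vertex3 (-1) (-1)   1 , Vertex3 (-1) (-1) (-1)
  , Vertex3 (-1)   1  (-1), Vertex3 (-1)   1    1
  , Vertex3   1  (-1)   1 , Vertex3   1  (-1) (-1)
  , Vertex3   1    1  (-1), Vertex3   1    1    1 ]

-- Faces as index lists, like the C version, resolved in one map.
faces :: [[Vertex3 GLfloat]]
faces = map (map (vertices !!))
  [ [0, 1, 2, 3], [3, 2, 6, 7], [7, 6, 5, 4]
  , [4, 5, 1, 0], [5, 6, 2, 1], [7, 4, 0, 3] ]

drawBox :: IO ()
drawBox =
  forM_ (zip n faces) $ \(nrm, face) ->
    renderPrimitive Quads $ do
      normal nrm
      mapM_ vertex face
```

Nothing about the rendering changes; the pattern matching and list functions just do the bookkeeping the C arrays and loop indices did by hand.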