Compressed textures in WebGL are coming soon. Chrome 19 (currently in beta as of Apr 23, 2012) already supports them. Initially only formats in the DXTn / S3TC family are supported; hopefully more will follow. Even with just DXT support this is great news because it means WebGL apps can use less GPU memory for textures, albeit at some cost to texture quality.
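Since support is new, an app has to probe for the extension at runtime. A minimal sketch (the helper name `getS3TCExtension` is my own, and the vendor-prefixed names are an assumption about how browsers expose the extension while it's still experimental):

```javascript
// Try the unprefixed name first, then vendor-prefixed variants.
// Returns the extension object (with COMPRESSED_RGB_S3TC_DXT1_EXT etc.)
// or null if the browser/GPU has no S3TC support.
function getS3TCExtension(gl) {
  const names = [
    'WEBGL_compressed_texture_s3tc',
    'WEBKIT_WEBGL_compressed_texture_s3tc',
    'MOZ_WEBGL_compressed_texture_s3tc',
  ];
  for (let i = 0; i < names.length; i++) {
    const ext = gl.getExtension(names[i]);
    if (ext) return ext;
  }
  return null;
}
```

If this returns null, the app needs a fallback path (more on that below in the RGB565 update).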

But how to get textures in DXT format? Traditionally in games the textures are pre-generated in DXT. This works fine in many cases, but it may not be optimal for web apps, especially ones that need to load many textures over a potentially slow internet connection. This is because DXT is designed to be fast to decode in hardware, not to be as compressed as possible. DXT1, for example, takes 4 bits per pixel ("4bpp"), while JPG, a very common image format on the web, can achieve a lower bpp at equivalent quality. When optimizing for image and texture loading speed, every bit counts. Ideally there would be a way to send a highly compressed texture over the wire and then quickly transcode it into DXT for use with WebGL.
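As a back-of-envelope check on that 4bpp figure (a tiny helper of my own, not from any of the project files): DXT1 packs each 4x4 block of pixels into 8 bytes, so a 256x256 texture is a fixed 32KB, while a JPG of the same image at a typical ~1bpp would be on the order of 8KB.

```javascript
// DXT1 stores each 4x4 pixel block in 8 bytes, i.e. 4 bits per pixel.
function dxt1ByteSize(width, height) {
  const blocksX = Math.max(1, Math.ceil(width / 4));
  const blocksY = Math.max(1, Math.ceil(height / 4));
  return blocksX * blocksY * 8;
}
// dxt1ByteSize(256, 256) → 32768 bytes (32KB)
```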

One way to do this is to send the texture as JPG over the wire and then transcode it into DXT in Javascript. Unfortunately this turns out to be slow: ~16ms for a 256x256 texture.

Enter Crunch, an image format and encoder/decoder designed to achieve bit rates similar to JPG at similar quality while being super fast to transcode into DXT. Sounds perfect, right? The only problem is that the decoder is in C++, and web apps are written in Javascript. Writing a Crunch decoder in Javascript would certainly be possible, but re-using the C++ decoder would save some work.

Enter Emscripten, a compiler that compiles C/C++ into Javascript. Sounds crazy, right? But it works, and the code it generates is fast. Not as fast as native, but in my experience so far "only" a factor of ~5x slower. I used it to compile the Crunch decoder into Javascript.

The Crunch decoder mostly lives in one file (a copy of which I've hosted here), crn_decomp.h. I wrote a small wrapper for it, crn.cpp, and used Emscripten to compile it into crn-O1.js. You can see that it basically looks like assembly written in Javascript: the heap and stack are just one big array, "pointers" are just numeric indexes into that array, etc. Compiling with the -O2 flag generates an optimized version, crn-O2.js.
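To give a flavor of what calling into such code looks like: before invoking the compiled functions, the caller allocates space in that big heap array and copies the file's bytes in, then passes the resulting index around where C++ would pass a pointer. A hypothetical sketch (a `Module` object exposing `_malloc` and a `HEAPU8` view is assumed; the actual names and exports in crn-O2.js may differ):

```javascript
// Copy a .CRN file's bytes into the Emscripten heap so the compiled
// C++ decoder can read them.
function copyToHeap(Module, bytes) {
  // "malloc" just reserves a range of the big heap array and returns
  // its starting index — that index plays the role of a C pointer.
  const ptr = Module._malloc(bytes.length);
  // HEAPU8 is a byte view of the heap; write the file contents there.
  Module.HEAPU8.set(bytes, ptr);
  return ptr; // hand this "pointer" to the compiled decode functions
}
```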

decode_test.html puts it all together. It loads crn-O2.js as a regular script, fetches a test .CRN file, calls functions from crn-O2.js to decode the .CRN file, uploads the resulting DXT compressed texture to WebGL, displays it using some code in renderer.js, and keeps some timing stats for display in a little table. The results are pretty impressive (to me, at least): I see ~50 megatexels/second on my setup (Chrome 19 on a nice linux box). At that speed a 256x256 image takes ~1.3 ms to transcode, which is much better than the 16ms for transcoding JPG into DXT.
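The upload step uses the extension's compressedTexImage2D entry point rather than the usual texImage2D. Roughly like this (a sketch; `uploadDXT1` is my own name and the real decode_test.html may structure this differently — `ext` is the object returned by gl.getExtension for S3TC):

```javascript
// Upload transcoded DXT1 data as a WebGL texture.
function uploadDXT1(gl, ext, width, height, dxtData) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // The format enum comes from the extension object, and the data is
  // the raw block-compressed bytes (8 bytes per 4x4 block for DXT1).
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT,
                          width, height, 0, dxtData);
  // No mip chain here, so avoid mipmapped filtering.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return tex;
}
```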

UPDATE: I added some code in dxt-to-rgb565.js to transcode from DXT into RGB565 so the texture can be displayed even if there is no DXT support. This transcoding would also be useful in real world applications that only want to serve one copy of the data. The transcoding from DXT into RGB565 seems to be about 5x as fast as the decode from CRN to DXT. Optimizing it was fun :)
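Conveniently, DXT1 already stores its two endpoint colors as RGB565, so the transcode mostly means rebuilding each block's 4-color palette and expanding the 2-bit per-pixel indices. A sketch of decoding one 8-byte block (my own illustration of the technique; the actual dxt-to-rgb565.js is organized differently and more heavily optimized):

```javascript
// Decode one 8-byte DXT1 block at `offset` in `bytes` into 16 RGB565
// pixels, row-major over the 4x4 block.
function decodeDXT1Block(bytes, offset) {
  // The two endpoints are little-endian uint16s, already in RGB565.
  const c0 = bytes[offset] | (bytes[offset + 1] << 8);
  const c1 = bytes[offset + 2] | (bytes[offset + 3] << 8);
  const split = (c) => [(c >> 11) & 0x1f, (c >> 5) & 0x3f, c & 0x1f];
  const pack = (r, g, b) => (r << 11) | (g << 5) | b;
  const [r0, g0, b0] = split(c0);
  const [r1, g1, b1] = split(c1);
  const palette = [c0, c1];
  if (c0 > c1) {
    // Opaque mode: the extra palette entries are 2/3-1/3 blends.
    palette.push(pack(((2 * r0 + r1) / 3) | 0, ((2 * g0 + g1) / 3) | 0,
                      ((2 * b0 + b1) / 3) | 0));
    palette.push(pack(((r0 + r1 * 2) / 3) | 0, ((g0 + g1 * 2) / 3) | 0,
                      ((b0 + b1 * 2) / 3) | 0));
  } else {
    // Punch-through mode: one 50/50 blend plus a "transparent" entry,
    // which becomes black when flattened to RGB565.
    palette.push(pack(((r0 + r1) / 2) | 0, ((g0 + g1) / 2) | 0,
                      ((b0 + b1) / 2) | 0));
    palette.push(0);
  }
  // Each of the remaining 4 bytes holds the 2-bit indices for one row.
  const out = new Uint16Array(16);
  for (let row = 0; row < 4; row++) {
    const bits = bytes[offset + 4 + row];
    for (let col = 0; col < 4; col++) {
      out[row * 4 + col] = palette[(bits >> (col * 2)) & 0x3];
    }
  }
  return out;
}
```

Because no color math beyond the palette blends is needed, it's plausible this runs much faster than a full CRN decode, which matches the ~5x observation above.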

Thanks to Rich Geldreich for developing Crunch and helping me with this little project. He was very responsive whenever I had questions, he provided the test .CRN files used here, and he ported the encoder to linux.

Also thanks to Colt McAnlis who first told me about Crunch and connected me with Rich.

Thanks to Travis Heppe for helping with the C++ code in crn.cpp (I am quite rusty at C++!).
