Sometimes I start to write, or at least start to think about writing, computer games. I find writing the engine attractive. I’m sitting on a few great game ideas; oh, to have the financial freedom to go be an indie game maker!

As I start to write a game, I find my mind drawn to the technical details.

One detail that draws me in every time is how to compress the artwork. I reckon games could be compressed 3x better:

Glest is a great open source game with a vibrant and active modding community. It’s a good example that I’ve studied.

A Glest mod may be anywhere between 20 and 200MB, as zipped by 7-zip. That’s a lot of bytes to download.

About 40% of that is the 3D model mesh format (Glest has its own, called G3D) and the other 60% is the textures that go on those models. Some mods have non-trivial quantities of OGG music, which I’ll ignore for the rest of this article, and the configuration files (Glest uses XML) really don’t count for anything bytewise.

The G3D model format uses keyframe-based vertex animation. It’s just like id’s MD2 and MD3 formats, used in Quake II and Quake III.

Each model is made from meshes. Each mesh has a number of frames, and for each frame it stores the vertices and normals and texture coordinates. It also stores an index array describing the triangles to make from these vertices; only one copy of this is stored, and it is used for all frames.

The absolute first step is to move to a bones animation system instead. In a bones animation you have one list of vertices and an index array describing how to join the vertices into triangles. Each vertex is attached to one or a handful of bones, with a weight per bone. You can compute the position of every vertex at any given time point by moving the bones. id moved to bones for Doom 3 (the MD5 format).
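To make that concrete, here is a minimal sketch of linear blend skinning in Python. The function name and data layout are my own for illustration, not G3D’s or MD5’s actual encodings, and a real engine would also skin the normals:

```python
def skin_vertex(rest, bones, weights):
    """Linear blend skinning for one vertex (a sketch; real formats
    also skin normals and store only a handful of nonzero weights).

    rest:    (x, y, z) rest-pose position
    bones:   list of 4x4 transforms as nested lists (row-major)
    weights: one blend weight per bone, summing to 1.0
    """
    x, y, z = rest
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bones, weights):
        if w == 0.0:
            continue  # skip bones this vertex isn't attached to
        # Apply the bone's 4x4 transform to the homogeneous point (x, y, z, 1).
        for i in range(3):
            out[i] += w * (m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3])
    return tuple(out)
```

With every bone at its rest pose (identity matrices) this returns the stored rest positions unchanged; posing the bones moves the vertices, so only one copy of the vertex list ever needs to be on disk.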

Bone animations take substantially less space; it’s as simple as that. You only have to store the vertices for one ‘frame’. Bones are also naturally how most authoring tools work, so it’s perhaps less of a burden on the creator.

Key-frame based vertex interpolation is perhaps slightly faster to draw, but at the cost of massively more memory both on-disk and in-game.
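A back-of-envelope comparison shows the scale of the saving. The byte layouts below are illustrative assumptions of mine (4-byte floats, positions plus normals per vertex, four weights per vertex, a 7-float transform per bone per frame), not the actual G3D or MD5 encodings:

```python
def keyframe_bytes(frames, verts, float_bytes=4):
    # Keyframe animation: positions + normals (3 + 3 floats) stored
    # for every vertex in every frame.
    return frames * verts * 6 * float_bytes

def bones_bytes(verts, bones, frames, weights_per_vert=4, float_bytes=4):
    # Bones animation: one rest pose, per-vertex bone indices + weights,
    # and one transform per bone per frame (quaternion + translation = 7 floats).
    rest = verts * 6 * float_bytes
    skin = verts * weights_per_vert * (1 + float_bytes)  # 1-byte index + float weight
    anim = frames * bones * 7 * float_bytes
    return rest + skin + anim
```

For a 2000-vertex mesh with 30 frames and 20 bones, the keyframe layout costs 1,440,000 bytes while the bones layout costs 104,800, well over a 10x saving before any general-purpose compression touches the file.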

Now you’ve got just one list of vertex positions and normals, you can dramatically cut this down further.

You can optimise the mesh to its in-game size. I’ve seen 3D models in Glest that you could render as a point cloud! There’s no need for detail that isn’t discernible or necessary when drawn in-game. But this is lossy, and I’d rather steer away from making judgements on artwork; that’s for the artist. If they give us a ridiculously high-poly model, we have to take it.

Many models contain massive amounts of symmetry. You can note that the left side of the torso mesh is the same as the right but mirrored, and just store the left side and instructions to mirror it to create the right side. This is not, of course, universally true. But the vast majority of Glest-style models could be halved. You can do this on a sub-mesh basis.
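A compressor can detect that symmetry automatically. Here is a sketch of a check for mirror symmetry about the x=0 plane; a real tool would also remap the triangle indices and flip the winding order for the regenerated half, and the function name is my own:

```python
def mirrored_about_x(vertices, eps=1e-5):
    """Check whether a vertex cloud is symmetric about the x=0 plane,
    so only the x<=0 half would need storing (a sketch; a real tool
    would also remap triangle indices and flip winding order)."""
    def key(v):
        # Quantise so floating-point noise doesn't break the matching.
        return tuple(round(c / eps) for c in v)
    present = {key(v) for v in vertices}
    return all(key((-x, y, z)) in present for (x, y, z) in vertices)
```

Run this per sub-mesh: a torso might pass while an asymmetric weapon arm fails, and you store the mirrored half only where the test succeeds.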

Hidden surface removal often relies on knowing the constraints of the camera in the destination game. In Glest, for example, the undersides of models will never see the light of day.

(Pictured: a model with its hidden surfaces highlighted in green, as found by a Python script I wrote to test this.)

You can usually save space by moving to triangle strips instead of triangle lists. There are tools to do this from Nvidia, ATI and others; I have played with the excellent stripe. This can more than halve your file without affecting your ability to do further compression.
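The saving comes from the encoding itself: a strip of N indices encodes N−2 triangles, where a plain triangle list would need 3×(N−2) indices. Decoding is trivial; a sketch (the alternating winding and degenerate-skip are the standard strip conventions, the function name is mine):

```python
def strip_to_triangles(strip):
    """Expand a triangle strip back into triangle index triples,
    alternating the winding so all faces keep a consistent orientation.
    A strip of N indices encodes N-2 triangles, versus 3*(N-2)
    indices for a plain triangle list."""
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        if a == b or b == c or a == c:
            continue  # degenerate triangle, used to join strips together
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris
```

The repeated indices that join strips cost a couple of entries each, but long strips through a well-connected mesh still come out far ahead of a raw triangle list.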

Many models use auto-normals, or their stored normals are close enough to the auto-normals in most cases. If you compute the auto-normals and store the diff to the actual normal, rounded to some precision (MD5, for example, only stores normals to a couple of decimal places), then you often get lots of 0s, which compress nicely when we compress the tiny mesh file we now have.
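A sketch of the idea: recompute area-weighted vertex normals from the mesh, then store only the quantised difference from the normals in the file. The function names and the quantisation step are my own illustration, not any format’s actual scheme:

```python
import math

def auto_normals(vertices, triangles):
    """Area-weighted vertex normals computed from the mesh itself --
    the 'auto-normals' most modelling tools generate (a sketch)."""
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for a, b, c in triangles:
        va, vb, vc = vertices[a], vertices[b], vertices[c]
        e1 = [vb[i] - va[i] for i in range(3)]
        e2 = [vc[i] - va[i] for i in range(3)]
        # Cross product: its length is twice the triangle's area, so
        # summing it per vertex gives area weighting for free.
        n = [e1[1] * e2[2] - e1[2] * e2[1],
             e1[2] * e2[0] - e1[0] * e2[2],
             e1[0] * e2[1] - e1[1] * e2[0]]
        for idx in (a, b, c):
            for i in range(3):
                acc[idx][i] += n[i]
    out = []
    for n in acc:
        length = math.sqrt(sum(c * c for c in n)) or 1.0
        out.append(tuple(c / length for c in n))
    return out

def normal_deltas(stored_normals, computed_normals, precision=100):
    """Quantised difference between the file's normals and the
    auto-normals: mostly zeros when they agree, which compresses well."""
    return [tuple(round((s - c) * precision) for s, c in zip(sn, cn))
            for sn, cn in zip(stored_normals, computed_normals)]
```

Wherever the artist never touched the normals, the delta array is all zeros, and a run of zeros is about the cheapest thing a general-purpose compressor can see.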

We’ve moved to bones animation, we’ve zapped hidden surfaces, we’re only storing one half of mirrored meshes and we’re using delta-to-auto normals. And now we compress this output file. It doesn’t matter much whether the file is a binary or a text format; they compress to about the same size under general-purpose compression, e.g. zip/7-zip/LZO.

However, if you move on to compression by prediction, meshes become rather interesting. If all the points in a mesh were truly random, the model would be a spiky ball; real models have rather constrained vertex positions, and that could be modelled. As long as we stay away from compression by prediction we keep great runtime performance; as soon as we go this route we’re likely to blow any runtime loading budget. Perhaps it would be interesting to download the predicted form, then decompress it once and store the decompressed meshes locally? I’ve read the papers on predicting vertex positions that I’ve found on Google, but I haven’t tried implementing anything, so I can’t be sure how much it helps relative to how much it slows loading. It also occurs to me that vertices are most likely to be attached to bones that are near them and their neighbours, and I haven’t seen any papers discussing modelling that.

Compressing compressed textures

Update: excellent expert analysis

I’ve mostly concentrated on compressing meshes, so my ideas on compressing the textures (and even modelling that compression on the mesh we know they map onto) have not really been experimented with. So I’ll list these ideas more briefly and with less conviction:

Often, especially where we have mirroring in the mesh, we have corresponding mirroring in the texture maps (mirroring texture coordinates in the mesh more or less relies on this). The texture map for a model is often full of striking symmetry. You could simply store half and note how and where to mirror it, or you could go further and rewrite the texture coordinates in the model to re-use the exact same texels, mirrored.
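The same symmetry test works on pixels as on vertices. A sketch for a single-channel image stored as a list of rows (the function name and the return-half convention are my own):

```python
def half_if_mirrored(pixels, tolerance=0):
    """If every row of the texture reads the same forwards and backwards
    (left/right mirror symmetry), return just the left half to store;
    otherwise return None. A sketch for single-channel images given as
    a list of rows; a tolerance > 0 allows for lossy-compression noise."""
    for row in pixels:
        for x in range(len(row) // 2):
            if abs(row[x] - row[-1 - x]) > tolerance:
                return None
    return [row[:(len(row) + 1) // 2] for row in pixels]
```

For textures that came through a lossy step, a small nonzero tolerance catches symmetry that exact comparison would miss, at the cost of the rebuilt half differing slightly.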

Sometimes textures are lossily compressed, but that really has to be an artist’s choice again; you can’t automatically JPEG texture maps (though you can use greyscale JPEGs for alpha channels). GPU texture compression, e.g. DDS/PVRTC/ATITC, is again an artist’s choice.

It is exciting to wonder whether iz compression can be applied to these textures; it is said to be comparable to PNG in ratio, but at least 2x faster to decode, i.e. to load!

Textures are often pre-mip-mapped, and when you have a mip-mapped image you can use one of the levels to predict, and so compress, the others.
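A sketch of that prediction for single-channel images stored as lists of rows: upscale the smaller mip level and store the larger level as residuals against it. The function names and the nearest-neighbour predictor are my own simple choices; a real codec would use a better filter:

```python
def upsample_2x(small):
    """Nearest-neighbour 2x upscale of a smaller mip level (list of rows)."""
    return [[px for px in row for _ in (0, 1)] for row in small for _ in (0, 1)]

def mip_residuals(large, small):
    """Store the large mip level as its difference from the upscaled
    smaller level; the residuals hover around zero and compress far
    better than the raw pixels (a sketch for single-channel images)."""
    pred = upsample_2x(small)
    return [[l - p for l, p in zip(lr, pr)] for lr, pr in zip(large, pred)]
```

Decoding reverses the process: upscale the small level, add the residuals back, and you have the large level exactly, losslessly.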

Texture, light maps, normal maps and such are often the exact same layout as each other. If you treat them as a whole and use one to model another you can likely get much better compression ratio than treating each as an isolated individual.

So, in this way, I think you can use less than a third of the bandwidth and disk space to store the exact same 3D game models.

"share"