Hi there, time for some updates on my voxel rendering pipeline! A lot of work has happened behind the scenes last month, so I'm taking some time to pause and reflect on where the interesting research opportunities are.

The lowdown for people just joining in: I’m building a Sparse Voxel Octree raycaster, so I’m shooting rays instead of rendering voxels as graphics primitives, which is what games like the rather excellent Minecraft do.

Video

Voxelizer

In order to voxelize bigger polygon meshes, I had to modify my voxelization method so it works out-of-core. That means you can specify the amount of memory the voxelizer is allowed to use. By exploiting the Morton order / z-curve and partitioning the target voxel grid accordingly, we can write out the voxel data in the correct order, in a linear fashion.
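The ordering trick can be illustrated with a small sketch (the function name and bit count are mine, not from the actual voxelizer): interleaving the bits of the x, y and z coordinates yields the Morton / z-curve index, and emitting voxels sorted by that index gives exactly the linear output order the out-of-core writer needs.

```python
def morton_encode(x, y, z, bits=10):
    """Interleave the bits of x, y, z into a single Morton (z-curve) index.

    Bit i of x lands on bit 3i of the code, bit i of y on bit 3i+1,
    bit i of z on bit 3i+2.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code
```

A useful property here: contiguous ranges of Morton indices correspond to axis-aligned sub-grids, which is what makes partitioning the grid under a memory budget straightforward.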

Also, in order to better capture the geometry of the input polygon meshes, I first perform an additional Loop subdivision step on the mesh, generating extra faces (and their normals) for the voxelization process to capture.
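As a rough illustration of what a subdivision step buys (this sketch only does the topological 1-to-4 split with edge midpoints; full Loop subdivision would additionally reposition vertices with its smoothing stencil):

```python
def subdivide(vertices, faces):
    """Split every triangle into four by inserting edge midpoints.

    vertices: list of (x, y, z) tuples; faces: list of (a, b, c) index
    triples. Midpoints are cached per edge so shared edges are split once.
    """
    verts = list(vertices)
    midpoint_cache = {}

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_cache:
            va, vb = verts[a], verts[b]
            verts.append(tuple((pa + pb) / 2 for pa, pb in zip(va, vb)))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # One corner triangle per original vertex, plus the center triangle.
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_faces
```

Each pass quadruples the face count, so the voxelizer gets four times as many face samples (and face normals) to capture per level of subdivision.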

In addition, I’ve removed some of the OS-specific calls to be able to cross-compile for x64 Linux, which allows me to run my voxelizer on some of our beefier (> 32 GB RAM) Linux servers.

Of course, allowing the voxelizer to use more memory increases the efficiency of the algorithm: more memory means fewer passes over the original mesh faces. There’s still room for more optimization here, but having the partitioning scheme in place was already a huge improvement, and is sufficient for now.
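The memory/passes trade-off boils down to simple arithmetic (a back-of-the-envelope sketch with hypothetical names; the real partitioning follows the Morton order): if the voxel grid doesn't fit in the memory budget, it is split into partitions, and each partition costs one pass over the input faces.

```python
import math

def voxelization_passes(grid_voxels, bytes_per_voxel, mem_limit_bytes):
    """Number of partitions (= passes over the input faces) needed
    when the working set has to fit within mem_limit_bytes."""
    total = grid_voxels * bytes_per_voxel
    return max(1, math.ceil(total / mem_limit_bytes))
```

For example, a 1024³ grid at one byte per voxel under a 256 MiB budget needs four passes; double the budget and the pass count halves.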

Renderer (Voxel Ray Caster)

When working with bigger voxel sets (4096 × 4096 × 4096), it became apparent that re-building the octree on every program run was getting a bit tedious, so I wrote an exporter that writes it to a binary blob cache and reads it back at run-time. Also, some optimizations to octree node storage resulted in a smaller memory footprint.
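A minimal sketch of such a binary node cache (the record layout here — child base pointer, child mask, data index — is my assumption for illustration, not the actual on-disk format):

```python
import struct

# Fixed-size little-endian record: uint32 child base, uint8 child mask,
# 3 pad bytes, uint32 data index -> 12 bytes per node.
_NODE_FMT = "<IBxxxI"

def save_nodes(path, nodes):
    """Write a small header (magic + node count) followed by node records."""
    with open(path, "wb") as f:
        f.write(struct.pack("<4sI", b"SVO1", len(nodes)))
        for base, mask, data in nodes:
            f.write(struct.pack(_NODE_FMT, base, mask, data))

def load_nodes(path):
    """Read the cache back; fails loudly if the magic doesn't match."""
    with open(path, "rb") as f:
        magic, count = struct.unpack("<4sI", f.read(8))
        assert magic == b"SVO1"
        return [struct.unpack(_NODE_FMT, f.read(12)) for _ in range(count)]
```

Because the records are fixed-size, loading is a single linear read with no pointer fix-up beyond the stored child base indices.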

The generation of non-leaf voxel levels is now a simple, bottom-up, linear average, which works okay for simple color info, but, as can be expected, produces artifacts for normals. A good representation of normals at a higher level is not simply the average of the normals in the lower part of the octree. Experimenting with more advanced subpixel filtering models will be the key to interesting research.
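A sketch of why the plain average breaks down for normals (helper names are mine): colors average component-wise just fine, but summing and renormalizing normals can cancel opposing directions entirely, throwing away all information about their distribution.

```python
import math

def average_child_colors(colors):
    """Component-wise average; acceptable for simple color info."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

def average_child_normals(normals):
    """Sum the child normals and renormalize. Opposing normals cancel,
    so e.g. the two faces of a thin wall average to a zero vector --
    one source of the artifacts mentioned above."""
    s = [sum(n[i] for n in normals) for i in range(3)]
    length = math.sqrt(sum(v * v for v in s)) or 1.0
    return tuple(v / length for v in s)
```

For example, averaging the normals (1, 0, 0) and (-1, 0, 0) yields the zero vector, whereas averaging the corresponding colors still gives a perfectly usable result.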

I’ve also written some more rendering modes, including one which allows me to switch octree levels in real-time, demonstrated in this month’s progress video. Another, more “fun” type of renderer, picks lower levels of the octree depending on screen Y position, resulting in this: