You can play De daemonici corporis fabrica on my itch.io page while I cook up the Vive version.

When I enter game jams, I go full crunch mode, which usually ends in very awkward, sleep-deprived Mondays at work. The older I get, the harder it is for me to get hyped enough to do one. Fortunately, PROCJAM is super chill, which makes it perfect for me. It also helps that cheating is totally okay, which means I can do stuff like necroing the organ generator I worked on for Bestiarium a few months ago and actually building a prototype around it.

Even though I went a long time without touching it, Bestiarium has an impressive amount of ideas floating around in my head, anchored by a few “philosophical” points it stands upon. It was always supposed to be a “one-room roguelike”, so inevitably the exploration space points less towards the “outside” and more towards the “inside”, which makes that William Blake poem fit like a glove:

To see a World in a Grain of Sand

And a Heaven in a Wild Flower,

Hold Infinity in the palm of your hand

And Eternity in an hour.

This sort of thing, to me, ends up being more fun when it’s taken in a literal direction. So I guess it’s more towards the “insides”.

One of the main mechanics of the game would involve extracting materials from the creatures you can summon. I could do the regular thing and just have them drop “rabbit’s foot” and “spider eyeball”, but it’s also a game about science, and science is sometimes about trial and error (and, most of the time, hard work and elbow grease).

So here was the pitch, a small bullet point marked “probably wouldn’t work” in my design notes: you have to dissect the creatures to harvest elements from them. PROCJAM was obviously the perfect excuse to actually wrap a prototype around that and test it out.

De daemonici corporis fabrica (which is verbatim the demonic version of De humani corporis fabrica) tries to emulate what ancient scholars could have felt when they opened up a corpse for the first time, which was probably some variation of “what the hell is this lumpy, wrinkly thing for, exactly?”. But I digress. This post is about the tech side, and it all started with… well, how the hell can we generate procedural organs?

It all started many months ago, on a night filled with watching animal dissection videos (most of the research work for this thing reminded me of this Gamasutra article). I could of course just use morph targets on a bunch of pre-made organ models. But I had just added a Markov chain-based name generator to the Invocation Prototype, and that got me thinking: could I maybe use it to generate organs that are more similar to “real” ones?

For Markov chains, we need some starting samples:

Yes, incredible 8×8 pixel templates that kind of resemble organs if you squint your eyes. The first thing I did was simply encode black and white pixels as 0s and 1s in a string, then feed that into the Markov chain – something I was pretty sure wouldn’t work, but I wanted to take a step-by-step approach.

Guess what? Yeah, I was right: it just generated glitchy-looking textures, because images require 2-dimensional consistency. So the next step was encoding the image as a list of integer 2D positions, one for each white pixel. That actually generated some interesting results that were plausible variations of the input.

MOAR, I thought, increasing the samples to 16×16. Turns out 8×8 is the sweet spot, and I didn’t look much into ways of improving the resolution (or whether it even makes sense to do so).

Ok, we’ve got some organ-y shapes, but they’re lo-res and 2D. How to take this up a dimension? The first thing that came to mind was using some form of metaballs.

I ended up simply translating the image into an 8×8×1 3D grid directly and feeding it to a marching cubes algorithm. 3D achieved, but very blocky.
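The post doesn’t show the conversion code, but the idea is simple enough that a minimal, hypothetical sketch can illustrate it: lift the 8×8 binary image into a one-voxel-deep boolean grid, which is the kind of scalar field a marching cubes implementation can consume (the exact grid layout depends on the marching cubes library you use).

```csharp
using System;

static class VoxelizeSketch
{
    // Hypothetical sketch: turn an 8x8 binary image into an 8x8x1 voxel
    // grid. Every white pixel becomes a single filled cell; a marching
    // cubes pass over this grid then produces the (blocky) 3D surface.
    public static bool[,,] ToVoxelGrid(bool[,] image)
    {
        int w = image.GetLength(0);
        int h = image.GetLength(1);

        // One voxel of depth along Y: the mesh is extruded from the image.
        bool[,,] grid = new bool[w, 1, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                grid[x, 0, y] = image[x, y];
        return grid;
    }
}
```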

Following my usual greedy approach to prototypes and game jams, instead of doubling back and getting metaballs to work, I decided to run a mesh smoothing algorithm on top of the generated mesh.

I simply threw in this mesh filter from the Unity Wiki, and it improved the shapes quite a bit. However, the organs were still totally flat, so I added a random offset in the Y direction to all the vertices that were in flat faces (i.e. had their normals pointing straight up or down).
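The flat-face jitter could look something like the sketch below. This is my reconstruction, not the project’s actual code, and it uses System.Numerics.Vector3 to stay self-contained – in Unity you’d read Mesh.vertices and Mesh.normals, run this, and write the vertices back (followed by RecalculateNormals).

```csharp
using System;
using System.Numerics;

static class OrganInflateSketch
{
    // Hypothetical sketch of the "un-flattening" pass: nudge every vertex
    // whose normal points straight up or down by a small random offset
    // along Y, pushing it outwards so the top and bottom get some relief.
    public static void JitterFlatVertices(
        Vector3[] vertices, Vector3[] normals, float maxOffset, Random rng)
    {
        for (int i = 0; i < vertices.Length; i++)
        {
            // "Flat" here means the normal is (anti)parallel to the Y axis.
            if (Math.Abs(normals[i].Y) > 0.99f)
            {
                float sign = normals[i].Y > 0 ? 1f : -1f;
                vertices[i].Y += sign * (float)rng.NextDouble() * maxOffset;
            }
        }
    }
}
```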

Well would you look at that. Granted, it is mostly pareidolia, but I can totally see a spleen, a lung, a liver… a spleen… and… well, you know, organ-like shapes.

Then the time came for texturing, and I again tried to cut corners by running away from unwrapping the UVs on this thing. I did a bunch of experiments using 3D noise functions (you can get the noise shaders I ported from this repo in a handy, barely tested CGINC for Unity). They looked pretty cool (and I’ll probably use them somehow), but the cheap solution got really expensive when I had to generate normal maps out of it.

In the end, a simple planar UV map (plotting the XZ vertex positions normalized into UV space) was good enough, and I could use painted textures (which not only grossed me out while hunting for photo sources, but also got me RT’d by Polycount).
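A minimal sketch of that planar projection, assuming a non-degenerate bounding rectangle on the XZ plane (again using System.Numerics for a self-contained example; in Unity you’d assign the result to Mesh.uv):

```csharp
using System;
using System.Numerics;

static class PlanarUvSketch
{
    // Hypothetical sketch: project each vertex's XZ position into 0..1 UV
    // space using the mesh's bounding rectangle on the XZ plane. Works
    // because the organs are mostly "pancakes" seen from above.
    public static Vector2[] PlanarUvFromXZ(Vector3[] vertices)
    {
        float minX = float.MaxValue, maxX = float.MinValue;
        float minZ = float.MaxValue, maxZ = float.MinValue;
        foreach (var v in vertices)
        {
            minX = Math.Min(minX, v.X); maxX = Math.Max(maxX, v.X);
            minZ = Math.Min(minZ, v.Z); maxZ = Math.Max(maxZ, v.Z);
        }

        var uvs = new Vector2[vertices.Length];
        for (int i = 0; i < vertices.Length; i++)
        {
            uvs[i] = new Vector2(
                (vertices[i].X - minX) / (maxX - minX),
                (vertices[i].Z - minZ) / (maxZ - minZ));
        }
        return uvs;
    }
}
```

Note that a purely planar projection stretches the texture on near-vertical sides, but on blobby shapes like these it’s barely noticeable.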

“What are you up to this saturday night?”
“Oh, you know, the usual, doing seamless viscera textures and stuff” #screenshotsaturday #gamedev pic.twitter.com/4ZktlppojH — Yanko Oliveira (@yankooliveira) November 5, 2016

Yay, sausages!

Ok, so we have the organs generated. But now we have to fit them into that good ole imp. The open chest was relatively easy: I just had to create an extra morph target on the imp model. But what about the organ positioning?

Originally I thought I’d have to create a very complex bounding algorithm to make sure everything “packed” together properly. In the end, just having spawn points attached to the bones of the creature was enough. But there was still a problem: the organs were clipping out of the body, as seen in the red circled areas.

Stencil buffer to the rescue! We can use the stencil buffer to “mask” a region of the chest, and only render the organs within that area.
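The post doesn’t include the shaders, but the standard Unity way to do this is a pair of ShaderLab Stencil blocks – roughly like the hypothetical sketch below. The mask mesh renders first, writes a reference value into the stencil buffer without touching the color buffer, and the organ shader only passes where that value was written.

```shaderlab
// Hypothetical sketch, not the actual project's shaders.
// Mask shader: drawn just before regular geometry, writes 1 into the
// stencil buffer but nothing into the color buffer.
SubShader {
    Tags { "Queue" = "Geometry-1" }
    ColorMask 0
    Stencil {
        Ref 1
        Comp Always
        Pass Replace
    }
    // ... a pass that just rasterizes the mask mesh ...
}

// Organ shader: only renders where the mask already wrote 1.
SubShader {
    Tags { "Queue" = "Geometry" }
    Stencil {
        Ref 1
        Comp Equal
    }
    // ... regular texturing/lighting passes ...
}
```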

The mask mesh (shown in green) is perfectly flat, and you can spot the borders if you pay close attention. But even in motion, the overall effect is good enough.

Looking back, most of my solutions were maybe overkill, and I could probably have achieved a better result with a good artist handcrafting all those organs, but it was a fun experiment. In the end, I guess it was mostly about exploring this uncharted territory which, weird as it might be, I’m pretty sure wasn’t visited before.

Appendix A: on the Markov chain samples (added 03/12/2016)

A redditor asked me:

“So the next step was encoding the image in a group of integer 2d positions for all the white pixels.” So what does this mean in relation to a markov chain? Like, pixel at 4,5 is “followed by” a pixel at 5,5 if it’s white? But if 5,5 is black, then it checks 6,5, then 7,5 etc. until it finds a white pixel, and that’s the pixel that follows 4,5?

My old uni studies on Markov chains are ultra rusty, so bear with me if I say something wrong. But basically, the samples I have are built by converting only the white pixels into a list of coordinates, row by row:

```csharp
private List<IntVector2> ConvertToVectorList(Texture2D texture2D)
{
    List<IntVector2> points = new List<IntVector2>();

    for (int y = 0; y < texture2D.height; y++)
    {
        for (int x = 0; x < texture2D.width; x++)
        {
            if (texture2D.GetPixel(x, y) != Color.black)
            {
                points.Add(new IntVector2(x, y));
            }
        }
    }

    return points;
}
```

This means that the Markov chain will start from a given coordinate, then get the next coordinate that is most likely to appear. It simply has a bunch of white pixel coordinates as samples, and outputs a bunch of white pixel coordinates in the end – the black pixels aren’t considered at all. Actually, pixels don’t exist at all in the chain’s context; it’s just a list of points.
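For the curious, here’s what such a chain over coordinate pairs could look like, stripped to its bones. This is a hypothetical order-1 sketch using plain value tuples instead of IntVector2; the actual project fed the coordinate lists to an existing generics-based Markov chain library rather than this hand-rolled version.

```csharp
using System;
using System.Collections.Generic;

static class CoordChainSketch
{
    // Hypothetical order-1 chain over (x, y) pairs: count which coordinate
    // tends to follow which across all sample lists, then walk the table
    // picking weighted-random successors until we have enough points.
    public static List<(int, int)> Generate(
        List<List<(int, int)>> samples, int length, Random rng)
    {
        // Build the transition table. Duplicated successors act as weights:
        // picking uniformly from the list is a weighted choice.
        var transitions = new Dictionary<(int, int), List<(int, int)>>();
        foreach (var sample in samples)
            for (int i = 0; i < sample.Count - 1; i++)
            {
                if (!transitions.TryGetValue(sample[i], out var next))
                    transitions[sample[i]] = next = new List<(int, int)>();
                next.Add(sample[i + 1]);
            }

        // Start from the first coordinate of a random sample and walk.
        var current = samples[rng.Next(samples.Count)][0];
        var result = new List<(int, int)> { current };
        while (result.Count < length &&
               transitions.TryGetValue(current, out var candidates))
        {
            current = candidates[rng.Next(candidates.Count)];
            result.Add(current);
        }
        return result;
    }
}
```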

If we converted every possible coordinate into a single letter (e.g.: (0,0) is a, (0,1) is b…), I’m pretty sure it would yield the same results. Fortunately, I had come across a generics-based Markov chain, so I could feed it this structure directly.

But the question leads to some other interesting questions: what exactly would happen if I encoded everything as black pixels instead of white? Or what if I generated the sample list column-based instead of row-based? I’m pretty sure that if I hadn’t slept through those advanced statistics classes, I wouldn’t have to code it to find out – but I’m also pretty sure that if they’d had experiments like this, I wouldn’t have slept so much. Sorry, Professor Aguiar! 😀