I used Blender to create all the graphics in the game Quantum Derail. The game renders 2D images and sprites to an HTML5 canvas, but I made these graphics by pre-rendering 3D geometry in Blender.

I had to figure out a number of things along the way, and I thought I should share the workflow in case it is useful to others. Keep in mind that this is just one of many possible workflows, but it is one that has proven to work for me. This article focuses only on the workflow to produce the visuals for a point&click adventure game, not on the code needed to run the game; for the coding part I wrote a separate article.

I should mention first that I rendered everything with Cycles; the style is a bit cartoonish, but the lighting is realistic.

Backgrounds and foregrounds

I’m going to focus first on the non-animated layers of the adventure game: the static background and foreground elements that define each room of your game.

Background (with foreground included):

Foreground:

One of the first things you need to decide is which elements are rendered behind the player avatar (backgrounds) and which ones are rendered on top (foregrounds). You can have more layers, and even some that show sometimes behind and sometimes on top, but that complicates the code a bit, so think carefully about whether you want to deal with that.

What you can’t have is a single object with parts that show behind the avatar and parts that show in front. If you want to create that visual effect, you have to break the object into separate pieces, because the foreground and background parts have to be rendered separately.

There are a number of ways to organize the layers for rendering. A simple way is to use Blender layers and switch them before triggering a render. Later I will discuss another approach that supports automation (multiple scenes per file).

Another consideration is how to set up the camera. The best integration with moving sprites would be achieved with an orthographic camera, but that would not be very interesting; a perspective camera looks much better. Just make sure the background doesn’t get too distorted by impossibly wide rooms, because toward the edges the player avatar will not be distorted in the same way and it will look terrible.

Sprites

If you are creating any kind of game, chances are you already know what a sprite is. In essence, it is a sequence of images with a transparent background that are rendered in your game one after another to produce an animation effect, like an animated GIF.

Depending on the constraints of your adventure game, you can balance between having sprites that are specific to each room and having generic sprites that are reused in all rooms. For example, in my game I used generic sprites for the main character, which I reuse in every room, and specific sprites for the other characters.

The reason it matters is lighting. If you want your sprites to be really well integrated into the room, you want to render them inside the room, with the same set of lights and backgrounds, so that light can bounce properly.

But if you (like me) are constrained by the size of the game, then you might choose to make some of the sprites generic, render them with neutral lighting, and hope they will somehow integrate in all the rooms where you use them.

See the difference in the following image. On the left, the protagonist of my game using the generic lighting, which doesn’t integrate at all with the dark lighting of this room. On the right, the same sprite rendered with integrated lighting:

Looping sprites that move

Stay away from sprites that both move and loop as much as possible, because they are painful to set up. For the player avatar, however, there is no way around it: its walk cycle loops and has to move.

Here is what you need to do to have a moving sprite that loops nicely and doesn’t show glitches when the animation starts over:

You need to have the initial pose twice: at the beginning of the animation and again after the last frame of the loop. If you do this, you will be able to check that it looks identical, and the animation is going to loop nicely. I even recommend rendering both, so that you can debug glitches by comparing these images.
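The duplicated frame is only there for checking; at playback time you skip it. Here is a minimal sketch of the indexing (the frame count is made up):

```python
# Sketch: playing back a looping walk cycle whose last rendered frame
# duplicates the first (frames 0..N, where frame N == frame 0).
# The frame count below is hypothetical.

def loop_frame(tick: int, rendered_frames: int) -> int:
    """Map a game tick to a frame index, skipping the duplicated last frame."""
    cycle_length = rendered_frames - 1  # drop the duplicate of frame 0
    return tick % cycle_length

# With 13 rendered frames (0..12, where frame 12 duplicates frame 0),
# the playback loop is 12 frames long:
indices = [loop_frame(t, 13) for t in range(14)]
# indices == [0, 1, ..., 11, 0, 1]
```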

This is up to you, but if you want to avoid sliding feet, you might want to have the character physically move as it walks, so that you can check directly in Blender that the feet are pinned while holding the weight. If you don’t care about sliding feet, you can make the character walk in place and skip the following steps.

The camera has to follow the character as it walks. It may be tempting to leave the camera static and measure how many pixels the character moves per frame, but if you use a perspective camera, the character will be projected differently as it moves, and you will get glitches.
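To see why, consider a simple pinhole model: two points of the character at the same horizontal position but different depths (say, the body and the nose) drift apart on screen as the character moves off-axis. A rough illustration, with made-up numbers:

```python
# Why a static perspective camera causes glitches: points of the character
# at different depths drift apart horizontally as the character moves away
# from the optical axis. Pinhole model; all numbers are illustrative.

def project_x(f: float, x: float, z: float) -> float:
    """Horizontal screen coordinate of a point under pinhole projection."""
    return f * x / z

def parallax_offset(f: float, x: float, z: float, depth: float) -> float:
    """Horizontal gap between two points at the same x, at depths z and z-depth."""
    return project_x(f, x, z - depth) - project_x(f, x, z)

f, z, depth = 35.0, 10.0, 0.5
print(parallax_offset(f, 0.0, z, depth))  # 0.0 at the optical axis
print(parallax_offset(f, 1.0, z, depth))  # small gap slightly off-axis
print(parallax_offset(f, 4.0, z, depth))  # much larger gap near the edge
```

The sprite you rendered at the center of the frame simply does not match what the camera would see at the edge.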

Like the camera, the lighting rig also has to move as the character walks. Otherwise, the first and last frames will be lit slightly differently, and you will also get glitches.

Render in the room without the room

One of the problems you will have to solve is rendering your sprites inside the room (so that they get the correct lighting) without rendering the room itself. That can be solved with mask layers. These layers contribute to light bouncing, but they are not displayed in the final rendered image:

Another problem is having transparent backgrounds. You can set that up in the main render options:

Shadows in sprites

Another problem you will probably find is shadows. This applies both to generic sprites and specific ones, although on generic sprites it is a bit fake, since you don’t know where the lights will be.

In any case, this will be much, much easier if you can use Blender 2.79 or later, because it introduced the concept of the shadow catcher. This option was added for easy integration of CGI objects over real footage, but it is super handy for sprites too.

First, you need to create a clone of the parts of the background where you expect the sprite to cast shadows. I typically join them all into a single mesh, and simplify it by removing parts that are far away, to speed up rendering. You might also need to move that mesh up a bit so that it rests on top of the background objects; otherwise the shadows might be occluded.

Then you enable the shadow catcher option on that cloned mesh:

The last part is to set up compositing nodes to mix the sprite with the shadow. You could also try to render the sprite and the shadow catcher in the same layer without using compositing, but it is very likely that you won’t like the results right away, and it is really hard to tune that way. Here are the steps for the compositing:

Put the sprite in one render layer, and the shadow catcher in another. Create a compositing tree that combines both render layers as you wish.

Here is the compositing tree I’ve been using. I created a group node with the standard setup so I could import it in all the scenes:

Here is a brief explanation of that tree:

Character is the sprite that casts the shadow.

ShadowAlpha is the alpha channel of the shadow pass. That’s what we will use for computations, instead of the shadow map.

ShadowZ is the z channel of the shadow pass. It is required for compositing correctly with the background if the sprite or its shadow is partially occluded.

BuildingZ is the z channel of the stuff that can occlude your sprite. If nothing occludes it, just set it to a large value.

The Brightness/Contrast node is there just to fine-tune the shadow. If it is too strong you can tune it with the brightness, and if it is too large, you can tune it with the contrast.

The Set Alpha node produces a new translucent black shadow map.

Z Combine clears the parts of the shadow that are occluded.

The final Alpha Over node combines the sprite with the shadow into a single image.
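For intuition, that final step is the standard “over” operator. Here is the per-pixel math as an illustrative sketch (straight alpha, not Blender’s exact implementation):

```python
# Sketch of the per-pixel math behind the final Alpha Over node: the sprite
# is composited over the translucent black shadow map. Straight
# (non-premultiplied) alpha; this is NOT Blender's actual implementation.

def alpha_over(front, back):
    """Composite a foreground RGBA pixel over a background RGBA pixel (0..1)."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    out = tuple((fc * fa + bc * ba * (1.0 - fa)) / out_a
                for fc, bc in ((fr, br), (fg, bg), (fb, bb)))
    return out + (out_a,)

# A fully opaque sprite pixel completely hides the shadow beneath it:
sprite = (0.8, 0.6, 0.4, 1.0)
shadow = (0.0, 0.0, 0.0, 0.5)  # translucent black from the Set Alpha node
print(alpha_over(sprite, shadow))  # (0.8, 0.6, 0.4, 1.0)
```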

Dynamic lighting of generic sprites

Dynamic lighting is a strong term; perhaps I should call this technique hacked lighting, because it is a big hack.

The avatar of the player uses generic sprites. That means I reuse the same sprites in every single room. But each room has different lighting, in some cases with significant changes in the color of the light. If I use the exact same sprites unchanged in all rooms, they don’t integrate at all. See the images on the left in the following two examples:





The trick I used is to tint the sprites based on the color of each room.

Initially, I tried just extracting a key color for each room and tinting the sprite at runtime (multiplying its pixels by that color). That already improved the integration dramatically. But there are a few rooms with strong lamps, and it is a bit sad that the character isn’t influenced when walking below them.

To improve the integration even further, what I did was to extract a lightmap for each room directly from Blender, and use that lightmap to tint the character dynamically as it walks around the room (by reading the lightmap pixel located right between his feet).

Generating that lightmap is really easy: it is just a matter of adding yet another layer with a single large quad and letting it be influenced by the lights and walls. I render it just like the sprites, and then use an image editor (GIMP in my case) to extend it a bit so the avatar doesn’t go black if it walks too close to a wall.
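The runtime side of the trick can be sketched like this (the data layout here is hypothetical; the actual game reads pixels from an HTML5 canvas):

```python
# Sketch of the lightmap trick: read the lightmap pixel under the avatar's
# feet and multiply the sprite's pixels by that color. The data layout
# (nested lists of RGB tuples in 0..1) is hypothetical.

def tint(sprite_pixel, light_color):
    """Multiply an RGB sprite pixel by the sampled light color."""
    return tuple(c * l for c, l in zip(sprite_pixel, light_color))

def sample_lightmap(lightmap, x, y):
    """Return the lightmap color at the avatar's feet position."""
    return lightmap[y][x]

# A tiny 2x2 lightmap: bright warm light on the left, dim blue on the right.
lightmap = [
    [(1.0, 0.9, 0.7), (0.2, 0.2, 0.5)],
    [(1.0, 0.9, 0.7), (0.2, 0.2, 0.5)],
]
pixel = (0.5, 0.5, 0.5)  # a mid-gray sprite pixel
print(tint(pixel, sample_lightmap(lightmap, 0, 0)))  # warm: (0.5, 0.45, 0.35)
print(tint(pixel, sample_lightmap(lightmap, 1, 0)))  # dim blue: (0.1, 0.1, 0.25)
```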

Multiple scenes in the same file

This technique was used to address the following issues:

1. In some cases it was convenient to have more than one room in a single Blender file, for example the front and back of the station. You can have multiple cameras in the same scene, but they must all have the same aspect ratio. That is a problem in adventure games with side scrolling, because some rooms might scroll more than others, and thus require a different aspect ratio.
2. I wanted to be able to render the background and the foreground without having to manually switch layers, because I was making mistakes and wasting render time.
3. Related to #2, I also wanted to automate rendering. That is, launch all the renders from the command line, so I can leave them running overnight.

One option that I tried for #1 was creating a master scene for the station model and then linking that scene from two other scenes containing the front and back specifics (the camera and aspect ratio). That quickly got annoying, because I had to switch back and forth between scenes to make small tweaks. For multi-person teams where one person does the modelling and another does the rendering, that setup might be convenient, but for me it wasn’t.

Besides, that approach with linked scenes was also very annoying for foreground/background splits.

So in the end I learned that a single Blender file can contain multiple scenes, with objects linked internally between them. Each scene can have its own aspect ratio and cameras, and even some scene-specific objects if you need to optimize render times, so that was the approach I followed.

This approach proved so convenient that I ended up having lots of scenes per file. Essentially, one scene for each image or sequence of images I wanted to render.

Automated rendering

I work in Linux, so I’m quite comfortable with Makefiles. What I did was to write a simple Python script for Blender that extracts the list of scenes and generates Makefile rules; then I just run make from the console to trigger all the renders. The advantage of using a build system is that it understands dependencies and will only re-render the images whose Blender file has changed. Given that rendering all the graphics of Quantum Derail takes a full night, it is important to avoid re-rendering scenes that weren’t modified.
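Here is a minimal sketch of such a generator. File and scene names are made up; inside Blender the scene list would come from bpy.data.scenes, and the rules invoke Blender’s standard -b/-S/-o/-F/-f command line flags:

```python
# Sketch of generating Makefile rules for per-scene rendering. Scene and
# file names are hypothetical; inside Blender the scene list would come
# from bpy.data.scenes. Blender appends the frame number to the output
# name (e.g. FrontBG_0001.png), so the targets are named accordingly.

def makefile_rules(blend_file: str, scenes: list) -> str:
    rules = []
    targets = []
    for scene in scenes:
        target = f"renders/{scene}_0001.png"  # Blender appends the frame number
        targets.append(target)
        rules.append(
            f"{target}: {blend_file}\n"
            f"\tblender -b {blend_file} -S {scene} "
            f"-o //renders/{scene}_ -F PNG -f 1\n"
        )
    return "all: " + " ".join(targets) + "\n\n" + "\n".join(rules)

print(makefile_rules("station.blend", ["FrontBG", "FrontFG", "BackBG"]))
```

Running make then renders only the scenes whose .blend file is newer than the corresponding image.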

Sprite sheets

If you are not familiar with this step: it turns the sequence of images produced by Blender when rendering sprites (you should output images, not video files) into a single image with all the frames arranged in a mosaic.
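Conceptually, the sprite sheet just assigns each frame a cell in a grid, and the game only needs the pixel offset of each frame. A tiny sketch with made-up sizes:

```python
# Sketch of the sprite sheet layout: each rendered frame gets a cell in a
# row-major grid, and the game only needs each frame's (x, y) offset.
# Frame size and column count here are made up.

def frame_offsets(n_frames, cols, frame_w, frame_h):
    """Top-left pixel coordinates of each frame in a row-major mosaic."""
    return [((i % cols) * frame_w, (i // cols) * frame_h)
            for i in range(n_frames)]

print(frame_offsets(5, 3, 64, 96))
# [(0, 0), (64, 0), (128, 0), (0, 96), (64, 96)]
```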

This part was the easiest. Turns out there is already a tool called montage (part of ImageMagick) that does most of the heavy lifting. Here are the useful command line options:

montage -geometry +0+0 -background none $filenames /tmp/__montage.html

The HTML file you specify at the end is an image map, and I just use it to know where the tool decided to place each frame, so I can use that information from code. It would have been more convenient for me if it were JSON, but parsing that HTML is not that hard. Here is a regular expression you can use from Node.js:

```javascript
// Assumption: montage's image map contains <area> tags like
//   <area href="frame0001.png" shape="rect" coords="0,0,63,63">
// (attribute order may vary across ImageMagick versions; adjust if needed).
const regex = /<area href="([^"]+)" shape="rect" coords="(\d+),(\d+),(\d+),(\d+)"/g;
```

About Quantum Derail

The game will be available from February 15, 2018 on itch.io. In the meantime you can check out the trailer and screenshots at https://quantumderail.com.

Did you find this article helpful? I could use your help to reach many gamers. When you support my daycause campaign, you will be posting an update on your social networks on the day of the launch, at the same time as everyone else. Let’s make a big splash!