Many programmers appreciate being able to see their code render something interesting to the screen. For a while I’ve wanted to write a texture painter, where I can import a model, paint colors on it, and then export those textures back to a file.

I’m using OpenGL in my code, but I’ll focus more on the actual mechanics and less on the language or the code itself.

Coordinate Spaces

I’ll start with a very brief overview of coordinate systems and moving between them, which is really important in computer graphics.

Look at the graphic below, which shows two imaginary people standing on a coordinate plane we’ll call “world space”. In this world space Tom is standing at (3, 1) facing down the y-axis, and Mary is standing at (0, 3) facing down the x-axis.

Each person from their perspective might have their own coordinate space. Assuming to their right is their x-axis and straight ahead is their y-axis, their coordinate systems would look something like this:

Now let’s say we place some chocolate mousse in this world (see image below). We can describe where the dessert is in three different coordinate spaces. In world space it’s at (1, 3). From Mary’s perspective it’s at (0, 1), and from Tom’s perspective it’s at (-2, 2). All of these are totally valid positions, but the context of the coordinate system is just as important as knowing the actual coordinates.

Moving Between Spaces

In this world, we can write functions that take one set of coordinates and transform them into another coordinate system. Let’s start with going from world space to Tom’s coordinate system. Because the world system and Tom’s system have the same axis orientation, we can simply offset the coordinates to calculate the new position: subtracting (3, 1), Tom’s position, from any world-space coordinate gives the position from Tom’s perspective.

Using the mousse as an example, it’s at (1,3) in world space, which according to this function would be (-2,2) in Tom’s space.

We can write a similar function to convert world space coordinates to Mary’s perspective. This is a little trickier because Mary’s coordinate space isn’t aligned, so we’ll need to do a rotation and a translation. Let’s start with a rotation matrix.

We actually know θ is -90 degrees, so we can apply that value and simplify the function to:

Trying a few points in world space, (0, 3) would become (3, 0), as if we spun it 90 degrees clockwise. Trying (4, -1) would give (-1, -4). That puts any world coordinate into the same orientation as Mary’s, but the origins are still not aligned. We still need to do a translation, as we did for Tom. I’ll skip this step for brevity, but note that there is a function that can transform one coordinate system into another.

The same thing applies in computer graphics, where we often need to convert between coordinate systems. For example, we might want to know where a point on a model lands in screen coordinates from a camera’s perspective.

Texture Mapping

Texture mapping is a long-standing technique for mapping a surface (in our case, a polygon mesh) to part of a texture image. These values may be used for a variety of purposes, but commonly they’re associated with the diffuse color of the material. The most common form is UV mapping, where each vertex of a model is mapped to a point on an image, and when drawing polygons the texture value can be looked up by interpolating between these values.

Rendering Perspective and UV View

While it’s entirely possible to paint on the textures directly, doing so is often unintuitive, painstaking, and error-prone. Take the cube below as an example: with its UV mapping rolled out, an artist would have to draw details along the edges and hope they match when the polygons are reassembled.

Left: the model rendered using world positions. Right: the model rendered using its UV values as positions.

While there are several ways to paint on models, I’ll talk about a technique suitable for models with existing UV coordinates. In the image below there’s a simple quad rendered in a perspective view. Over the top of that is the paint buffer, an FBO that I’ve drawn brush strokes onto. In this mode the two aren’t really interacting; the paint values and the model behind them are uncoupled.

paint buffer debug mode

When painting in normal use, however, an artist only wants to see the paint applied to the model itself. To do this, the FBO is instead used as a lookup texture when rendering the model. If a given pixel in the viewport has been painted, the model’s fragment shader will use the brush color to override the value in its texture.

the brush is set to green and the paint values cross directly over the mesh

“Baking” to the Texture

The values in the paint buffer are only valid from the camera’s perspective and do not make sense when that perspective changes.

changing camera position without baking the values to the texture

When painting from a given perspective, the paint needs to be applied or “baked” onto the texture before changing the camera. A separate pass renders the scene from the UV perspective. When rendering the individual polygons, each vertex has knowledge of both its world position and UV position. The UV position is used to determine the polygon positions on the texture map. A transform function using that camera’s properties maps the world position to the paint buffer to determine whether that part of the polygon has been painted.

the bake shader renders polygons in “UV space”, but uses world position to lookup the paint value from the camera’s perspective

When baking in the “UV View” we still need to apply a transform function that maps world space into the camera’s orthographic space when looking up the paint buffer value, since the UV view can zoom and translate around.

Here’s what the bake shader (in GLSL) actually looks like:

```glsl
#version 330

uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
uniform vec4 brushColor;
uniform vec2 targetScale;

in vec2 meshUv;
in vec4 cameraPos;

out vec4 fragColor;

void main() {
    // convert the UV position to the camera's screen
    // position so we can do the texture lookup
    vec3 screenPos = 0.5 * (vec3(1, 1, 1) + cameraPos.xyz / cameraPos.w);
    vec3 paintUv = vec3(screenPos.xy * targetScale, screenPos.z);

    // get paint intensity from screen coordinates
    float paintIntensity = texture(paintTexture, paintUv.xy).r;

    // we overwrite the mesh texture every time, so the final
    // color is a blend of what was already there and what has
    // been painted
    vec4 meshColor = texture(meshTexture, meshUv);
    vec3 diffuseColor = mix(meshColor.rgb, brushColor.rgb, paintIntensity);
    fragColor = vec4(diffuseColor, 1);
}
```

That’s it though!

What’s Next

One of the biggest problems with the above approach is that it doesn’t take occlusion into account. When painting on a cube, for example, drawing on the front-facing faces will also apply that color to the back-facing faces. This is often not desired.

painting through a model

When doing the paint buffer lookup, the bake shader has no knowledge of which part of the model is at that location. When baking, the back-facing polygons see the same paint values as the front-facing polygons in that region. For a simple cube we can avoid this by not baking back-facing polygons, but that doesn’t help with more complicated geometry, where front-facing polygons can overlap. In my next post I’ll discuss a couple of strategies to avoid painting through the model.

Source Code

The painting code for the animations above is available on GitHub. It’s written in C++ with OpenGL and Qt5. It currently lacks quite a bit of documentation, but it can be built in Qt Creator and requires assimp (for loading the 3D models).