Reducing the memory consumed by a framebuffer is often useful in modern rendering: HDR techniques both scale memory usage and often steal the alpha channel, preventing correct blending. Here's a method for reducing the number of color channels required from three down to two, allowing the reclaimed space either to be given back or used to gain two spare alpha channels.

Introduction

People often complain that next-gen games have an "everything is brown" look to them. So I figured: if it is indeed the case that a game is not making use of the full RGB spectrum, why not try to take advantage of that and optimize rendering accordingly?

Many graphics cards don't offer very good support for HDR render targets, so a lot of games use some sort of packed LDR format to store their data. One popular solution is to pack large values into existing 32-bit RGBA render targets, for example by converting into a YUV representation, then storing Y split across two color channels (the idea behind the LogLUV encoding).
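To make the "split Y across two channels" idea concrete, here is a minimal sketch (my own simplification, not the exact LogLUV bit layout): the luminance is stored as an integer high part plus a fractional low part, each fitting an 8-bit channel.

```python
def pack_luminance(y):
    """Split a luminance value in [0, 256) across two 8-bit channels:
    a coarse integer part and a fractional part scaled up to 8 bits."""
    y = max(0.0, min(255.99, y))     # clamp to the representable range
    hi = int(y)                      # coarse 8-bit value
    lo = int((y - hi) * 256.0)       # 8 extra bits of precision
    return hi, lo

def unpack_luminance(hi, lo):
    """Recombine the two channels into a single luminance value."""
    return hi + lo / 256.0
```

Together the two channels give 16 bits of luminance precision, which is the point of the split: neither channel alone could represent an HDR range usefully.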

The downside to the LogLUV approach, and others like it, is that it consumes the alpha channel, which makes alpha-based transparency hard or impossible.

So, I figured you could build a rendering system using only two color channels (which I call ST) instead of the standard RGB. With only two channels to deal with, it becomes possible to fit the data into a 32bpp render target while still allowing the pixel shader to output a conventional alpha value.

Two-channel rendering

Rendering with only two color channels has been attempted before, on a small number of occasions. Most notable is the long-abandoned two-strip Technicolor™ system, invented in the 1920s. The camera exposed two strips of black-and-white film, one through a red filter and the other through a blue/green filter. The full-color image was approximated at playback by duplicating the blue/green record into both the blue and green channels.

Jim Blinn used a two-channel system for compressing planetary texture data in 1980. His system used one channel for brightness and one for saturation, assuming the hue to be constant.

My idea is basically similar to the Technicolor approach, except I automatically tweak the filter coefficients to optimize for a given reference image.

Implementation

We first define our two color channels (ST) like so:

S = clamp( R×c0 + G×c1 + B×c2 )
T = clamp( R×c3 + G×c4 + B×c5 )

This transform can be applied at the end of the pixel shader, outputting S and T into two of the final framebuffer channels. We can later reverse the transform as a post-process effect, using the following:

R′ = clamp( S×c6 + T×c7 )
G′ = clamp( S×c8 + T×c9 )
B′ = clamp( S×c10 + T×c11 )
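The encode/decode pair is just two small matrix products with clamping. Here's a sketch in Python (the function names and the example coefficients are mine, chosen only for illustration):

```python
def clamp01(x):
    """Clamp a value to the displayable [0, 1] range."""
    return max(0.0, min(1.0, x))

def encode_st(r, g, b, c):
    """Project an RGB color onto the two ST channels using
    coefficients c[0..5] (done at the end of the pixel shader)."""
    s = clamp01(r * c[0] + g * c[1] + b * c[2])
    t = clamp01(r * c[3] + g * c[4] + b * c[5])
    return s, t

def decode_st(s, t, c):
    """Reconstruct an approximate RGB color from ST using
    coefficients c[6..11] (done as a post-process)."""
    r = clamp01(s * c[6] + t * c[7])
    g = clamp01(s * c[8] + t * c[9])
    b = clamp01(s * c[10] + t * c[11])
    return r, g, b
```

For example, with the toy coefficient set S = R, T = B, R′ = S, G′ = (S + T)/2, B′ = T, the color (0.4, 0.45, 0.6) round-trips to (0.4, 0.5, 0.6): red and blue survive exactly, and green is rebuilt as their average.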

The only remaining question is how to choose the filter coefficients. I wrote a small off-line utility that takes a reference image and tries to optimize the coefficients for it: it tests various random coefficient sets and keeps whichever gives the least RMS error when applied. Once an optimal filter set has been chosen for a given scene, the coefficients can be written out for use at runtime.
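The off-line search can be sketched as a brute-force random search. This is my simplified stand-in for the utility described above, not its actual source; the essentials are only the round-trip through ST and the RMS metric:

```python
import random

def _clamp01(x):
    return max(0.0, min(1.0, x))

def _roundtrip(r, g, b, c):
    # Encode RGB -> ST, then decode ST -> R'G'B' with coefficients c[0..11].
    s = _clamp01(r * c[0] + g * c[1] + b * c[2])
    t = _clamp01(r * c[3] + g * c[4] + b * c[5])
    return (_clamp01(s * c[6] + t * c[7]),
            _clamp01(s * c[8] + t * c[9]),
            _clamp01(s * c[10] + t * c[11]))

def rms_error(pixels, c):
    """RMS reconstruction error over a list of (r, g, b) reference pixels."""
    total = 0.0
    for r, g, b in pixels:
        r2, g2, b2 = _roundtrip(r, g, b, c)
        total += (r - r2) ** 2 + (g - g2) ** 2 + (b - b2) ** 2
    return (total / (3 * len(pixels))) ** 0.5

def optimize(pixels, tries=50000, seed=1):
    """Try random coefficient sets, keeping the one with the lowest RMS error."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(tries):
        c = [rng.uniform(-2.0, 2.0) for _ in range(12)]
        err = rms_error(pixels, c)
        if err < best_err:
            best, best_err = c, err
    return best, best_err
```

In practice you would feed in (a downsampled version of) the reference frame; a smarter optimizer such as hill climbing would converge faster, but pure random search is enough to demonstrate the idea.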

Results

Here are some before-and-after shots of Gears of War.

Image 1

S = RGB • [0.5 0.3 0.3], T = RGB • [0.1 0.0 0.8]

R′ = ST • [1.8 -0.8], G′ = ST • [0.8 0.2], B′ = ST • [-0.2 1.2]
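As a quick sanity check, plugging the Image 1 coefficients into a small Python sketch shows how a pure white round-trips (the helper names are mine; the numbers are the ones listed above):

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

# Image 1 coefficients as listed above: S/T encode rows, then R'/G'/B' decode rows.
ENC = [0.5, 0.3, 0.3, 0.1, 0.0, 0.8]
DEC = [1.8, -0.8, 0.8, 0.2, -0.2, 1.2]

def roundtrip(r, g, b):
    """Encode RGB to ST, then decode back to an approximate RGB."""
    s = clamp01(r * ENC[0] + g * ENC[1] + b * ENC[2])
    t = clamp01(r * ENC[3] + g * ENC[4] + b * ENC[5])
    return (clamp01(s * DEC[0] + t * DEC[1]),
            clamp01(s * DEC[2] + t * DEC[3]),
            clamp01(s * DEC[4] + t * DEC[5]))
```

Pure white (1, 1, 1) comes back as roughly (1.0, 0.98, 0.88): the blue channel loses the most, which fits a filter set tuned for a brown-leaning scene.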

Image 2

S = RGB • [-0.4 0.1 1.3], T = RGB • [0.4 0.2 0.4]

R′ = ST • [-0.7 1.7], G′ = ST • [-0.1 1.1], B′ = ST • [0.5 0.5]

Image 3

S = RGB • [1.7 -0.4 -0.3], T = RGB • [0.2 0.0 0.8]

R′ = ST • [0.7 0.3], G′ = ST • [0.3 0.7], B′ = ST • [-0.1 1.1]
