Maybe you’ve heard of Reversed-Z.

It’s a pretty good way to get more precision out of your depth buffer. Very useful for games with long view distances, like Just Cause 2.

So, how do you use it in OpenGL? Here’s a no-nonsense step-by-step guide.

Step One: Set Clip Space Conventions

Reversed-Z is designed for clip-space Z values in the range [0,1], not [-1,+1]. OpenGL’s default convention is [-1,+1], but you can override that using glClipControl:

glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

I recommend sticking this line of code at the start of your program, and never changing the clip conventions after that. Give up entirely on the [-1,+1] convention; it’s hands-down worse than the [0,1] convention when it comes to precision, so switching is a good decision even if you’re not using Reversed-Z. Anyways, when you sample from a depth texture in OpenGL, you already get a value between 0 and 1, so using the [0,1] convention for clip coordinates makes everything more consistent.

glClipControl is an OpenGL 4.5 feature. If you don’t have OpenGL 4.5, it might still be available as an extension (see: hardware supporting GL_ARB_clip_control). Therefore, you could use something like the following code snippet:

```cpp
GLint major, minor;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);

if ((major > 4 || (major == 4 && minor >= 5)) ||
    SDL_GL_ExtensionSupported("GL_ARB_clip_control"))
{
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
}
else
{
    fprintf(stderr, "glClipControl required, sorry.\n");
    exit(1);
}
```

Step Two: Create a Floating Point Depth Buffer

The whole Reversed-Z thing is designed for floating point depth, not fixed point. That means you should be using a floating point depth buffer.

You can follow the FBO setup code below as an example:

```cpp
int width = 640, height = 480;
GLuint color, depth, fbo;

glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_SRGB8_ALPHA8, width, height);
glBindTexture(GL_TEXTURE_2D, 0);

glGenTextures(1, &depth);
glBindTexture(GL_TEXTURE_2D, depth);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32F, width, height);
glBindTexture(GL_TEXTURE_2D, 0);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    fprintf(stderr, "glCheckFramebufferStatus: %x\n", status);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

Side tip: Copying a Framebuffer to a Window

If you didn’t know, you don’t need to create a depth buffer when you create your window. You can render your scene (and depth buffer) into an offscreen FBO, then copy to your window only at the end. For example:

```cpp
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// TODO: Render scene
glBindFramebuffer(GL_FRAMEBUFFER, 0);

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default FBO
glBlitFramebuffer(
    0, 0, fboWidth, fboHeight,
    0, 0, windowWidth, windowHeight,
    GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

With this approach, you can also render your scene at a lower resolution than your window, since the call to glBlitFramebuffer will scale up your rendering. You might find this useful on a 4K display, where you may still want to render at a lower resolution for a better frame rate or battery savings. It also saves you the trouble of creating your window in sRGB mode, which takes some complexity out of your window management code.

You can also use glBlitFramebuffer to resolve a multi-sampled framebuffer into a single-sampled one. However, you’re not allowed to scale and resolve multisamples in the same glBlitFramebuffer call, so you might have to call glBlitFramebuffer twice: once to resolve the multisamples, and once again to copy and scale up to your window. At that point, it might become interesting to do all of this in a single fragment shader instead. Alternatively, you can resolve and scale simultaneously if you use the GL_EXT_framebuffer_multisample_blit_scaled extension (see: hardware supporting GL_EXT_framebuffer_multisample_blit_scaled).
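As a sketch, the two-pass version might look like the following (msaaFbo and resolveFbo are hypothetical names for a multi-sampled FBO and a same-sized single-sampled FBO):

```cpp
// Pass 1: resolve multisamples. Source and destination rectangles must
// match, since resolving and scaling can't be combined in one blit.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(
    0, 0, fboWidth, fboHeight,
    0, 0, fboWidth, fboHeight,
    GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Pass 2: scale the resolved image up to the window.
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default FBO
glBlitFramebuffer(
    0, 0, fboWidth, fboHeight,
    0, 0, windowWidth, windowHeight,
    GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```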

Anyways, back to the whole Reversed-Z thing…

Step Three: Clear your Depth Buffer to Zero

With a normal depth test, you might have been clearing the depth buffer to 1, since that was the “far” value. When you call glClear(GL_DEPTH_BUFFER_BIT), it clears the depth buffer using the last set value of glClearDepth, which is 1 by default. On the other hand, with Reversed-Z, the “far” value is now 0, so you have to use the proper clear depth value:

```cpp
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClearDepth(0.0f);
glClear(GL_DEPTH_BUFFER_BIT);
```

Step Four: Flip your Depth Comparison to GREATER

As implied by its name, using Reversed-Z means that far depth values are represented by smaller numbers. That means you need to switch your glDepthFunc from GL_LESS to GL_GREATER, as shown in the code below. In this example code, I also reset the comparison and depth state back to OpenGL defaults, so the state doesn’t leak into other code that might not be doing Reversed-Z, or that might not be using depth testing.

```cpp
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);
// TODO: Draw your scene
glDepthFunc(GL_LESS);
glDisable(GL_DEPTH_TEST);
```

Step Five: Update your Projection Matrix

Reversed-Z requires a slightly different projection matrix than what (for example) gluPerspective creates. What follows is some code you can use to create this new projection matrix. It uses the “right-handed” coordinate system convention, meaning that the Z axis points out of your computer screen. Equivalently, objects in front of your camera have a negative Z value in view space. This is the OpenGL convention.

Note that this matrix doesn’t only reverse the Z, it also sets the far plane to infinity, which works well for extremely large view distances. You can find the derivation of this matrix in the following article: http://dev.theomader.com/depth-precision/

```cpp
glm::mat4 MakeInfReversedZProjRH(float fovY_radians, float aspectWbyH, float zNear)
{
    float f = 1.0f / tan(fovY_radians / 2.0f);
    return glm::mat4(
        f / aspectWbyH, 0.0f,  0.0f,  0.0f,
                  0.0f,    f,  0.0f,  0.0f,
                  0.0f, 0.0f,  0.0f, -1.0f,
                  0.0f, 0.0f, zNear,  0.0f);
}
```

You can call this code as follows:

```cpp
glm::mat4 proj = MakeInfReversedZProjRH(glm::radians(70.0f), (float)width / height, 0.01f);
glUniformMatrix4fv(PROJECTION_MATRIX_LOCATION, 1, GL_FALSE, value_ptr(proj));
// TODO: glDraw*()
```

Note that I’m referring to the OpenGL Mathematics library “glm”, which follows the OpenGL convention of column-major order for the arguments of its constructor. That means the first 4 inputs to the constructor are actually the first column of the matrix. This might be important if you’re translating this code to a different matrix library.

That’s it?

That’s it!

In Summary

1. Set your depth clip conventions to [0,1] using glClipControl.
2. Create and use a floating point depth buffer.
3. Clear your depth buffer to 0 instead of the default of 1.
4. Use GL_GREATER instead of GL_LESS for your depth test.
5. Use a projection matrix that flips the depth.

What about DirectX?

There’s almost no difference in implementing this in DirectX. First, you don’t need glClipControl, because the [0,1] convention is already the default in DirectX. Second, if you’re following DirectX conventions and using a “left-handed” convention for the view, just turn the “-1” in the projection matrix into a “+1”. You also need to do this in OpenGL if you’re using left-handed conventions, meaning that the camera’s Z axis goes into your screen, or equivalently that objects in front of the camera have a positive Z value in view space.