Ray tracing is a rendering technique that’s hovered just over the horizon for decades. It was boosted to prominence (and tremendous controversy) when Intel announced Larrabee in 2007 and proclaimed that the chip would deliver a digital Holy Grail: real-time ray tracing (RTRT) at playable framerates in modern games. Much was written. AMD and Nvidia actually agreed on something for the first time in living memory and publicly denounced the idea (at Nvision 2008, one analyst declared that Larrabee was “like a GPU from 2006”). When Intel announced that it was canceling Larrabee and refocusing the project on HPC workloads, ray tracing faded back into the shadows.

But not entirely. Bit by bit, the idea is creeping out again, in both professional and consumer contexts — and spearheaded mostly by Nvidia. On the professional side, Nvidia has built its own software solutions (OptiX, iray) and assisted in the development of several others (Arion, Lightworks, and Octane Render). Some rendering plug-ins, like V-Ray RT, have added support for GPU-based ray tracing in addition to their CPU rendering capabilities. On the consumer side of the equation, Epic’s Unreal Engine 4 supports a new lighting technique, dubbed voxel cone tracing, that offers some of ray tracing’s advantages while avoiding its weak points.

Nvidia has begun talking about GPU ray tracing as a technology that’s available to developers and could be deployed alongside cloud gaming. We decided to dig into the issue, take a look at current capabilities, and explore where the technology might take us.

Of rays and rasters

The vast majority of computer graphics today are drawn using a technique called rasterization. In this context, rasterization is the process of rendering a 3D scene on a 2D monitor by drawing polygons (shapes). Lighting and shadows are applied by evaluating each pixel and calculating its visibility from a given light source. An extraordinary amount of calculation goes into these models, but they’re ultimately approximations. In a game with a day/night cycle, the sun and moon are there for the player’s benefit, to make the game seem realistic; there are no actual light rays being flung from the star overhead.
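To make that concrete, here’s a minimal sketch of the kind of per-pixel lighting evaluation a rasterizer performs (in Python, with invented scene values, not drawn from any particular engine). Note that the “sun” is nothing but a direction vector fed into a shading formula:

```python
# Minimal sketch of per-pixel diffuse (Lambertian) shading, the kind of
# lighting evaluation a rasterizer performs after polygons are projected.
# The normal, light direction, and colors are invented example values.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, surface_color, light_color):
    """Diffuse intensity = max(0, N . L); no actual rays are traced."""
    n = normalize(normal)
    l = normalize(light_dir)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(s * c * n_dot_l for s, c in zip(surface_color, light_color))

# One pixel: a surface tilted toward an overhead "sun" that exists only
# as a direction vector, not as a physical light source in the scene.
print(lambert_shade((0.0, 1.0, 0.2), (0.3, 1.0, 0.0),
                    (0.8, 0.2, 0.2), (1.0, 1.0, 0.9)))
```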

Ray tracing, in contrast, models the light rays themselves. When rays strike an object in the scene, they can be absorbed, reflected, or refracted, or they can cause the object to fluoresce. Rays can be fired from the viewer’s eye, from light sources, or both. The advantage of this approach is that reflections and refractions arise naturally from the simulation; highly complex reflections of reflections can be generated by allowing a ray to bounce multiple times after it intersects a surface.
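As a rough illustration, here’s a hedged sketch of that core loop in the classic (Whitted-style) recursive form: fire a ray, find the nearest intersection, then follow the reflected ray. Everything here (the two spheres, the reflectivity values, the bounce limit) is invented for the example; a real tracer would add shadows, refraction, and proper materials.

```python
# Skeleton of classic (Whitted-style) recursive ray tracing: cast a ray,
# find the nearest hit, then recurse along the reflected ray. The scene
# and shading constants are invented example values.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Return distance t to the nearest intersection, or None.
    `direction` is assumed to be a unit vector."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None   # small epsilon avoids self-hits

SPHERES = [((0.0, 0.0, -3.0), 1.0, 0.6),   # (center, radius, reflectivity)
           ((1.5, 0.5, -4.0), 0.5, 0.3)]

def trace(origin, direction, depth=0):
    if depth > 3:                    # stop after a few bounces
        return 0.0
    nearest = None
    for center, radius, refl in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, radius, refl)
    if nearest is None:
        return 0.2                   # "sky" brightness
    t, center, radius, refl = nearest
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - c) / radius for p, c in zip(point, center))
    reflected = tuple(d - 2.0 * dot(direction, normal) * n
                      for d, n in zip(direction, normal))
    # Local shading plus whatever the bounced ray sees.
    local = max(0.0, dot(normal, (0.0, 1.0, 0.0)))
    return (1 - refl) * local + refl * trace(point, reflected, depth + 1)

print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))  # one primary ray
```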

One of the problems with the original ray tracing algorithm is that it creates sharp, aliased lines that can’t be dealt with in the normal fashion. Also, the test for whether a given point is lit or shadowed is binary, which leaves the final image unrealistically sharp and defined. Other ray tracing techniques have been invented to deal with some of these problems, such as beam tracing, cone tracing, and distributed/stochastic ray tracing.
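Distributed ray tracing, for example, softens the binary shadow test by firing many jittered shadow rays at an area light and averaging the results, so a point can be partially lit. Here’s a hedged sketch; the occluding sphere and the light rectangle are invented example geometry.

```python
# Sketch of the distributed ray tracing fix for binary shadows: sample
# several jittered points on an area light and average visibility, so a
# point can be anywhere from fully lit to fully shadowed.
import math
import random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def blocked(origin, direction, max_t, center, radius):
    """True if a sphere occluder sits between origin and the light.
    `direction` is assumed to be a unit vector."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(a * d for a, d in zip(oc, direction))
    c = sum(a * a for a in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 1e-4 < t < max_t

def soft_shadow(point, light_corner, light_u, light_v, occluder, samples=64):
    """Fraction of jittered shadow rays that reach an area light."""
    visible = 0
    for _ in range(samples):
        su, sv = random.random(), random.random()   # jitter on the light
        target = tuple(c + su * u + sv * v
                       for c, u, v in zip(light_corner, light_u, light_v))
        offset = tuple(t - p for t, p in zip(target, point))
        dist = math.sqrt(sum(o * o for o in offset))
        direction = normalize(offset)
        if not blocked(point, direction, dist, *occluder):
            visible += 1
    return visible / samples

# A point in the penumbra: partially lit, which one shadow ray
# aimed at a point light could never express.
print(soft_shadow((0.0, 0.0, 0.0),                   # shaded point
                  (-1.0, 4.0, -1.0),                 # light rectangle corner
                  (2.0, 0.0, 0.0), (0.0, 0.0, 2.0),  # light edges
                  ((0.9, 2.0, 0.0), 0.8)))           # occluding sphere
```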

Intel’s Larrabee put ray tracing front and center, but the argument that ensued over which approach was better fundamentally mischaracterized the situation. Rasterization might be a “trick,” but the entire history of special effects, on film and in games, is the history of developing illusions that fool the eye. Oftentimes, reality is simply too expensive to model. Rasterization proponents have leveled this charge at ray tracing in the past, claiming that ray tracing-like levels of detail can be reproduced via rasterization in a fraction of the time.

Consider the following model, as rendered using ray tracing and using a point-cloud ambient occlusion (AO) technique (rasterization). At first glance, you may not see a difference.

On closer examination, it’s clear that the two are shadowed differently. The ray-traced model has a quality edge on the AO version, but it’s not enormous, at least not to the human eye. Here’s a pixel-perfect comparison of the two models, as rendered by AMD’s The Compressonator tool:

The point here is that the actual difference between the two models is significantly smaller to the human eye than to computer analysis. This might seem to leave ray tracing forever lagging rasterization, especially considering the performance differential and the degree of research that’s been poured into the latter.
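That kind of analysis is easy to reproduce, incidentally. As a hedged sketch (using the Pillow imaging library rather than The Compressonator itself, and with placeholder file names), the following computes a per-pixel difference between two renders and summarizes how large it actually is:

```python
# Rough equivalent of a pixel-diff comparison: subtract two renders
# per channel and report how large the differences actually are.
# File names are placeholders for the two rendered images.
from PIL import Image, ImageChops

ray_traced = Image.open("raytraced.png").convert("RGB")
rasterized = Image.open("ambient_occlusion.png").convert("RGB")

diff = ImageChops.difference(ray_traced, rasterized)
diff.save("difference.png")  # mostly-black image: matching pixels are 0

# Summarize: what fraction of pixels differ, and by how much on average?
pixels = list(diff.getdata())
changed = sum(1 for p in pixels if any(c > 0 for c in p))
mean_err = sum(sum(p) for p in pixels) / (3 * len(pixels))
print(f"{changed / len(pixels):.1%} of pixels differ; "
      f"mean per-channel error {mean_err:.2f}/255")
```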

Writing ray tracing off, however, would be a serious mistake. First, ray tracing naturally creates some effects, like refraction, that are extremely expensive to model realistically using rasterization. Second, any comparison between the two implies that they’re at equal stages of development. That simply isn’t true: companies like Nvidia have poured hundreds of millions of dollars into building hardware-accelerated rasterizers and designing software to run on them. Dedicated ray tracing hardware, in contrast, is in its infancy.
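Refraction illustrates the first point: in a ray tracer, bending a ray where it enters glass or water is a single application of Snell’s law at the hit point, after which the bent ray is traced like any other. The following is a generic textbook sketch, not any particular renderer’s code; the index-of-refraction values are standard figures for air and glass.

```python
# Why refraction is "natural" in a ray tracer: bending a ray at a surface
# is just Snell's law applied at the hit point, then the bent ray is
# traced like any other. eta is the ratio of refractive indices.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refract(direction, normal, eta):
    """Refracted direction per Snell's law, or None on total internal
    reflection. `direction` and `normal` must be unit vectors."""
    cos_i = -dot(direction, normal)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: reflect instead
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * d + (eta * cos_i - cos_t) * n
                 for d, n in zip(direction, normal))

# A ray entering glass (air n=1.0 -> glass n=1.5) bends toward the normal.
incoming = (0.707107, -0.707107, 0.0)   # 45 degrees onto a flat surface
surface_normal = (0.0, 1.0, 0.0)
print(refract(incoming, surface_normal, 1.0 / 1.5))
```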
