Here’s an update on what we’re working on in Cycles. So far, the features we’ve added for the 2.64 release are BVH build time optimizations, exclude layers, motion/UV passes, a filter glossy option, a light falloff node, a fisheye lens (by Dalai Felinto), ray length access (by Agustin Benavidez), and some other small things.

There’s also now a page in the Cycles manual about Reducing Noise in renders. All of these tricks are used in production with other render engines and should apply to path tracers in general.

Currently there are still three major features on my list to implement: motion blur, volumetric rendering and a contribution pass. We already have a vector pass for doing motion blur in compositing, but real raytraced motion blur would be nice as well. The code for this is mostly written, but more work is needed to make it faster; currently it slows down the raytracing kernel even when there are no motion-blurred objects.
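In a path tracer, raytraced motion blur essentially means picking a random time within the shutter interval for each ray and intersecting against geometry interpolated to that time. A minimal sketch of the idea in Python (all names here are hypothetical, not Cycles code; a real implementation interpolates full object transforms and BVH bounds, not just positions):

```python
import random

def lerp(a, b, t):
    # componentwise linear interpolation between two points
    return tuple((1.0 - t) * x + t * y for x, y in zip(a, b))

def sample_motion_blur_ray(shutter_open, shutter_close, pos_start, pos_end):
    # pick a random time within the shutter interval for this ray
    t = random.uniform(shutter_open, shutter_close)
    # interpolate the object's position to that time; the ray is then
    # intersected against geometry at this interpolated state
    return t, lerp(pos_start, pos_end, t)
```

Because every ray carries its own time, even a static scene pays the cost of time interpolation unless the kernel can detect that no objects are motion-blurred, which is the overhead mentioned above.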

Volumetrics should be a target for 2.65, the release after the one we’re working on now. There’s already a developer with patches to add volumetrics to Cycles; we’ll need to review the design and add support for rendering smoke and point cloud datasets. Currently volumetrics are rendered with Blender Internal, and in production they probably end up in separate layers anyway, but it’s not very convenient to mix render engines and have to switch back and forth.

The idea for the contribution pass is that it’s like the only shadow material in Blender Internal, but more flexible and interacting with indirect light, to help composite objects into footage. Exactly how this will work is still unclear.

Everything else is related to optimizing performance in one way or another: making things simply render faster, reducing noise levels, or adding tricks to avoid noise. Baking light to textures for static backgrounds may be added too, but I’d like to avoid this if at all possible. Most of the optimizations we are looking at will be CPU only; on the GPU there’s not as much we can do due to hardware restrictions. Some directions we will look into:

Improving core raytracing performance (SIMD, spatial splits, build parameters).

Decoupling AA sample and light sample numbers. Currently one path is traced for each sample, but depending on the scene it might be less noisy to distribute samples in another way.

Better texture filtering and use of OpenImageIO for tiled image cache on the CPU, so we can use more textures than fit in memory.

Texture blurring behind glossy/diffuse bounces. Like the Filter Glossy option, this can help reduce noise at the cost of some accuracy, especially useful for environment maps or high frequency noise textures.

Non-progressive tile based rendering, to increase memory locality, reducing cache misses in main memory and the image cache.

Adaptive sampling, to render more samples in regions with more noise.

Better sampling patterns. I’ve been testing a few different ones, but so far none have beaten the Sobol patterns we use when taking many samples (> 16); I still hope we can find something here.

Reducing memory usage for meshes (vertices, normals, texture coordinates, ..).

Improving sampling for scenes with many light sources (only rough ideas here, not sure it will work).
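To illustrate the decoupling idea from the list above: instead of tracing a fresh path for every light sample, several light samples can share the work of a single camera (AA) sample, which can be less noisy for direct-light-dominated scenes. A toy sketch in Python, with hypothetical callbacks standing in for the actual kernel:

```python
def render_pixel(aa_samples, light_samples_per_aa, trace_camera_ray, sample_light):
    # One camera (AA) sample amortizes several light samples taken at the
    # first hit, instead of one light sample per full traced path.
    # trace_camera_ray and sample_light are hypothetical stand-ins.
    total = 0.0
    for _ in range(aa_samples):
        hit = trace_camera_ray()            # camera ray + first intersection
        for _ in range(light_samples_per_aa):
            total += sample_light(hit)      # direct lighting estimate at hit
    return total / (aa_samples * light_samples_per_aa)
```

The trade-off is that extra light samples reduce direct lighting noise but do nothing for antialiasing or depth of field, which only more AA samples can improve; the best split depends on the scene.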
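Adaptive sampling, in turn, boils down to spending a sample budget where a noise estimate is highest. A toy sketch, assuming per-pixel variance estimates are already available (a hypothetical helper, not Cycles code):

```python
def adaptive_sample_counts(variances, base_samples, extra_budget):
    # Give every pixel a base sample count, then distribute an extra
    # budget proportionally to each pixel's variance estimate.
    total = sum(variances)
    if total == 0.0:
        return [base_samples] * len(variances)
    return [base_samples + int(extra_budget * v / total) for v in variances]
```

For example, with variance estimates [0.0, 1.0, 3.0], a base of 16 samples and 40 extra samples, the noisiest pixel receives three times the extra samples of the middle one, and the clean pixel receives none.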

Hopefully these changes, together with careful scene setups, will be sufficient to keep render times within reason, but beyond that we can still look at things like irradiance caching or other caching tricks. I hope to avoid these because they don’t extend well to glossy reflections. I’d like to keep things unbiased-ish; that seems to be the direction many render engines are moving in, and it’s easier to control, maintain and parallelize over many cores.