A lot of fur rendering systems were designed for human hair, and that's a problem. Fur fibers have a larger central core, known as the medulla, that scatters light differently, giving fur its soft yet glossy appearance. Current renderers ignore the medulla and instead simulate how light bounces from one fur fiber to the next. As a result, they have to do a lot of number-crunching and tend to be slow.

The UC researchers instead used a concept called subsurface scattering to see how light ricochets around and through translucent fur medullas. To understand the principle, shine a smartphone's flashlight through your finger in a dark room. "You will see a ring of light, because the light has entered through your finger, scattered inside and then gone back out," the UC team explained.
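The principle can be illustrated with a minimal Monte Carlo sketch (a toy model of my own, not the researchers' actual method): photons take a random walk through a translucent slab, scattering at random intervals until they are absorbed or escape. The function name, parameters, and values below are all illustrative assumptions.

```python
import math
import random

def subsurface_walk(thickness=1.0, mean_free_path=0.2,
                    absorb_prob=0.05, n_photons=20000, seed=1):
    """Toy isotropic subsurface scattering in a slab of given thickness.

    Each photon enters at depth 0 heading inward; at each scattering
    event it is absorbed with probability `absorb_prob`, otherwise it
    picks a new random direction. Returns the fractions of photons that
    re-emerge from the entry side (back-scattered) and the far side
    (transmitted).
    """
    rng = random.Random(seed)
    reflected = transmitted = 0
    for _ in range(n_photons):
        depth, direction = 0.0, 1.0  # direction = cosine relative to surface normal
        while True:
            # Exponentially distributed free path between scattering events
            step = -mean_free_path * math.log(1.0 - rng.random())
            depth += direction * step
            if depth <= 0.0:
                reflected += 1    # light scattered back out: the glow you see
                break
            if depth >= thickness:
                transmitted += 1  # light passed all the way through the slab
                break
            if rng.random() < absorb_prob:
                break             # photon absorbed inside the medium
            direction = rng.uniform(-1.0, 1.0)  # isotropic re-scatter (1-D projection)
    return reflected / n_photons, transmitted / n_photons

reflected_frac, transmitted_frac = subsurface_walk()
```

In the finger-and-flashlight experiment, the back-scattered fraction is the ring of light you see; the renderer's job is to evaluate this kind of light transport inside every fur fiber, which is why a brute-force simulation is so expensive.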

Applying subsurface scattering to fur is a thorny mathematical problem, though, so the UC team turned to a neural network. After being trained on just a single scene, the AI was able to apply subsurface scattering to a variety of other scenes, including wolf, raccoon and hamster models.

The results are a clear improvement, and the technique works equally well for hair. The team is now shooting for real-time fur rendering, which could be extremely useful for game designers who want to introduce more realistic animals. Fur-covered Sonic or Pikachu, anyone?