We don’t cover a lot of technical papers on CG Channel, but the demo for Kevin Karsch’s work on rendering 3D models into archive photographs was just so darn cool, we knew we had to make an exception.

The video shows 3D models being match-lit into photographic backplates with little more user input than defining vanishing points and drawing around the light sources in the image.

No scene geometry is required, nor are physical measurements or photos from alternative viewpoints.

The results include advanced illumination features such as colour bleeding, and can even be animated – check out the part at 02:20 in which virtual pool balls are added to a photo after simply outlining the cushions of the table.

The system even mimics shafts of light – simply by drawing two bounding boxes on the image.

According to the voiceover, the algorithm Karsch and his colleagues have developed estimates the scene’s albedo and direct lighting semi-automatically, then refines the lighting model according to the user’s annotations.
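Once the scene’s lighting has been estimated and the synthetic object rendered into it, results like this are commonly composited back onto the original photo via differential rendering, in the style of Debevec’s classic technique: the object’s own pixels are pasted in, while everywhere else only the *change* in lighting (shadows, colour bleeding) is added to the photo. The sketch below is a hypothetical toy illustration of that compositing step – the function name and the test values are ours, not the researchers’ code.

```python
import numpy as np

def differential_composite(photo, render_with, render_without, mask):
    """Differential-rendering composite (a minimal sketch).

    composite = M * R_obj + (1 - M) * (photo + R_obj - R_scene)

    render_with    -- render of the estimated scene WITH the object
    render_without -- render of the same scene WITHOUT the object
    mask           -- 1.0 where the object covers a pixel, else 0.0
    """
    delta = render_with - render_without          # shadows / bounce light only
    return mask * render_with + (1.0 - mask) * (photo + delta)

# Toy 2x2 example: flat grey backplate, one object pixel, one shadowed pixel.
photo = np.full((2, 2, 3), 0.5)
render_without = np.full((2, 2, 3), 0.5)          # empty-scene render matches the photo
render_with = render_without.copy()
render_with[0, 0] = [1.0, 0.2, 0.2]               # the inserted object
render_with[1, 1] -= 0.2                          # a shadow it casts nearby
mask = np.zeros((2, 2, 1))
mask[0, 0] = 1.0                                  # object covers only this pixel

out = differential_composite(photo, render_with, render_without, mask)
# Object pixel comes straight from the render; the shadowed photo pixel
# darkens by the rendered difference (0.5 - 0.2 = 0.3).
print(out[0, 0], out[1, 1])
```

The appeal of the differential formulation is that errors in the estimated scene model largely cancel out: where the object changes nothing, `render_with - render_without` is zero and the original photograph passes through untouched.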

Karsch – a PhD student at the University of Illinois at Urbana-Champaign – is presenting the research at Siggraph Asia. Definitely one to check out if you plan to attend the show.

Read the full paper on Kevin Karsch’s homepage

Read a longer discussion of the research on CGTalk

Tags: demo, Kevin Karsch, lighting, rendering, rendering synthetic objects into legacy photographs, research, tech