SIGGRAPH 2018 Papers

Machine Learning, Graphics, and Rendering

While the SIGGRAPH 2018 talks and exhibitor sessions were dominated by ray tracing, the research papers skewed toward machine learning.

The papers selected below are even more heavily biased toward machine learning because this is my personal list of papers of interest and I’m trying to deepen my understanding of machine learning.

Favorite Papers

The following is a list of my favorite papers at the conference. Generally, I chose the papers in this list because I felt the result, performance, or intuition behind the paper was impressive.

[ link video ] Non-Stationary Texture Synthesis by Adversarial Expansion. There was some buzz around the conference regarding this paper, as it beautifully synthesizes textures with non-trivial patterns.

[ link video ] Noise2Noise: Learning Image Restoration without Clean Data. Originally presented at ICML, Noise2Noise made multiple appearances in SIGGRAPH 2018 talks. The intuition is that a network can be trained on pairs of noisy images (rather than noisy/clean pairs) to learn a function that produces a clean image from a noisy one. The result is an order-of-magnitude reduction in the computational cost of generating datasets for image-denoising research. Further, it generalizes to non-zero-mean noise, such as text and other image artifacts.
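The core observation behind Noise2Noise can be illustrated with a toy example. This is a sketch in NumPy, not the paper's actual network or training setup: fitting a predictor under an L2 loss against independently noisy targets converges toward the same answer as fitting against clean targets, because zero-mean noise averages out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" signal: a constant value we want to recover.
clean_value = 0.5
n = 10_000
clean = np.full(n, clean_value)

# Two independent noisy observations of the same clean signal
# (analogous to two noisy renders of the same scene).
noisy_inputs = clean + rng.normal(0.0, 0.1, n)
noisy_targets = clean + rng.normal(0.0, 0.1, n)

# The L2-optimal constant predictor against *noisy* targets is their
# mean, which approximates the clean value: argmin_c E[(c - t)^2] = E[t].
estimate = noisy_targets.mean()
```

The same argument carries over to a conditional predictor (a network): with enough data, zero-mean noise in the targets does not shift the L2-optimal prediction.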

[ link video ] Efficient Rendering of Layered Materials Using an Atomic Decomposition with Statistical Operators. Layered materials, rendered efficiently in Unity using extremely heavy math.

[ link ] Deep Convolutional Priors for Indoor Scene Synthesis. Iteratively builds a prior distribution over where objects might be placed in a room. I added this to my favorites list for its potential practical applications.

[ link video ] tempoGAN: A Temporally Coherent Volumetric GAN for Super-Resolution Fluid Flow. Restricted to 4x upsampling, but produces a great temporally stable result with aesthetically pleasing artifacts. Reduces simulation time from hours to single-digit minutes, reduces time complexity to linear scaling, and enables parallel execution of the simulation.

[ link ] Single-Image SVBRDF Capture with a Rendering-Aware Deep Network. Generates materials from a single cell-phone photo. Quality was improved by using a differentiable renderer to formulate the loss function in rendered-image space, while retaining the ability to backpropagate through the network.
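The idea of a rendering-aware loss can be sketched with a toy differentiable renderer (a hypothetical single-parameter Lambertian shader, not the paper's model): the loss compares rendered images, and the gradient flows back through the renderer to the material parameter.

```python
import numpy as np

# Toy differentiable "renderer": pixel = albedo * max(0, n.l).
# A stand-in for a real SVBRDF renderer, parameterized by one albedo value.
def render(albedo, ndotl):
    return albedo * ndotl

ndotl = np.array([0.2, 0.5, 0.9, 1.0])  # hypothetical shading terms per pixel

true_albedo = 0.7
target_image = render(true_albedo, ndotl)  # the "photograph" we try to match

# Gradient descent on the image-space MSE loss. The chain rule passes
# through the renderer: d(pixel)/d(albedo) = ndotl.
albedo = 0.1  # initial guess
lr = 0.5
for _ in range(200):
    residual = render(albedo, ndotl) - target_image
    grad = 2.0 * np.mean(residual * ndotl)
    albedo -= lr * grad
```

The design point is that the loss lives in rendered-image space (where perceptual quality matters) rather than parameter space, yet remains differentiable end to end.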

Papers Fast Forward Picks

The SIGGRAPH app had a new feature this year for flagging papers during the fast forward. I flagged the following papers as future reading material.