The tech is described in a research paper submitted to SIGGRAPH 2019, and the results are impressive.

They trained the network by capturing photos of 18 people under different directional light sources in a studio. The team noted that each person was captured from 7 camera angles while "a densely sampled sphere of lights" illuminated them from every direction.
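The paper's neural network learns relighting end-to-end, but the classic image-based relighting principle behind this kind of capture is worth sketching: with one-light-at-a-time photos taken under a sphere of lights, a novel lighting environment can be approximated as a weighted sum of the individual captures. The array shapes and weights below are made up for illustration:

```python
import numpy as np

# Hypothetical sketch: 4 one-light-at-a-time (OLAT) captures of a subject,
# here stand-in random 8x8 RGB images instead of real studio photos.
rng = np.random.default_rng(0)
olat_images = rng.random((4, 8, 8, 3))

# Illustrative weights: how strongly each light contributes to the
# target lighting environment.
env_weights = np.array([0.5, 0.2, 0.2, 0.1])

# Relit image = weighted sum over the light axis.
relit = np.tensordot(env_weights, olat_images, axes=1)
print(relit.shape)  # (8, 8, 3)
```

Because lighting adds linearly, this weighted sum reproduces what the subject would look like under all four lights at those intensities at once.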

The researchers demonstrate this using several different photos starting around the 1:55 mark in the video up top, with further examples beginning around 4:20.

It might remind you of the Portrait Lighting feature on iPhones, but this project doesn't use a depth map or any other data beyond a basic RGB image. The tech "can produce 640 × 640 images in only 160 milliseconds," and the researchers note that their "model may enable compelling consumer-facing photographic relighting applications."

You can get more details here.