There have been previous attempts at AI-guided fills, but they've typically been limited to rectangular sections, have focused on gaps near the middle of the picture and haven't scaled well to missing photo data of different sizes. NVIDIA's "partial convolution" approach, which guarantees that the output for missing pixels doesn't rely on the placeholder values inside the hole, can work with holes of any shape, size or location. That, in turn, produces uncannily realistic results in many cases -- even if the AI doesn't know exactly what's missing, the result usually looks like it fits. Previous methods tended to produce obvious glitches.
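To make the "ignore the hole" idea concrete, here's a minimal single-channel NumPy sketch of a partial convolution, not NVIDIA's actual implementation: each window is convolved only over pixels the mask marks as valid, the result is renormalized by the fraction of valid pixels, and the mask itself is updated so that holes shrink layer by layer.

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """Single-channel, stride-1, 'valid'-padding partial convolution sketch.

    image:  HxW array; values inside holes are arbitrary placeholders
    mask:   HxW array, 1 = valid pixel, 0 = hole
    kernel: kxk weight array
    Returns (output, updated_mask). Hole pixels never influence the output.
    """
    k = kernel.shape[0]
    H, W = image.shape
    out = np.zeros((H - k + 1, W - k + 1))
    new_mask = np.zeros_like(out)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + k, j:j + k]
            m = mask[i:i + k, j:j + k]
            valid = m.sum()
            if valid > 0:
                # Zero out hole pixels, then rescale by the share of
                # valid pixels so brightness stays consistent.
                out[i, j] = (kernel * win * m).sum() * (k * k / valid)
                # The window saw at least one real pixel, so this output
                # location counts as valid for the next layer.
                new_mask[i, j] = 1.0
    return out, new_mask
```

Because the mask zeroes out hole pixels before the multiply, changing the values inside a hole leaves the output bit-for-bit identical -- which is exactly the guarantee the article describes.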

NVIDIA trained its system by generating tens of thousands of randomly shaped hole masks and teaching the AI to reconstruct the photos behind them. The company then tested with a separate set of holes to ensure the AI had genuinely learned how to restore photos, rather than memorizing the training examples.
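The key to that training setup is having a large supply of irregular hole shapes. A hypothetical sketch of generating such masks (NVIDIA's actual masks were produced differently; this just illustrates carving free-form holes via random-walk strokes):

```python
import numpy as np

def random_hole_mask(size=64, n_strokes=4, max_steps=30, seed=None):
    """Illustrative mask generator: 1 = valid pixel, 0 = hole.

    Each stroke is a random walk whose path is thickened into a
    free-form hole, giving irregular shapes at varied locations.
    """
    rng = np.random.default_rng(seed)
    mask = np.ones((size, size))
    for _ in range(n_strokes):
        y, x = rng.integers(0, size, 2)
        for _ in range(rng.integers(5, max_steps)):
            # Thicken the stroke by zeroing a small square around (y, x).
            mask[max(0, y - 2):y + 3, max(0, x - 2):x + 3] = 0
            y = int(np.clip(y + rng.integers(-3, 4), 0, size - 1))
            x = int(np.clip(x + rng.integers(-3, 4), 0, size - 1))
    return mask
```

Pairing each clean training photo with a fresh mask like this, and holding back a disjoint set of masks for evaluation, is the standard way to check that a model has learned to inpaint in general rather than to fill specific holes.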

The results aren't always flawless. You may see a facial feature clearly borrowed from someone else, and it's bound to struggle if the hole is so large that there isn't enough information to create a plausible reconstruction. But what's here could still be incredibly useful. You could repair seemingly hopeless images without hours of painstaking reconstruction. The scientists also envision the AI helping to upscale images without losing sharpness. In effect, you'd only ever have to worry about touching up minor details -- the days of recreating whole segments from scratch might soon be over.