The digital painting tool GANPaint has gone viral on social media. The product of a team of high-profile researchers from MIT, IBM, Google, and the Chinese University of Hong Kong, GANPaint allows anyone, even those with little knowledge of digital painting or Photoshop, to “paint” incredibly complex and detailed photorealistic scenes.



Decades ago, Microsoft Paint amused many an early PC owner. The GANPaint user interface is as simple as Paint’s, but instead of brushing a coat of red paint across the frame, users can select, for example, the “tree” brush and paint a forest with just a few mouse clicks. Moreover, when the “door” brush adds a door to a building, it does so in a style consistent with the building’s architecture and other cues in the scene. GANPaint also allows users to identify and remove objects from a painting using the same technique. A video demonstration is available, and readers can try GANPaint themselves online.

Social media hailed the revolutionary painting tool with posts like “For the first time in my life, I have been able to ‘draw’ things that look like things” and “not aware of any other web-based tool like this.”



GANPaint originates from the paper GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, which proposes a framework for visualizing and understanding the structures learned by generative adversarial networks (GANs) — an exciting new AI technique capable of generating photorealistic images.



Researchers remain puzzled by the impressive results GANs can achieve. While a growing number of papers study and advance GAN performance, it remains largely a mystery why one GAN variant works better than another.



The paper’s method is straightforward. The researchers first use a segmentation network and a dissection method to identify groups of units in a pretrained GAN that match specific object classes such as trees, clouds, and doors; they then intervene on these units, activating or deactivating them to see how the model responds; and finally they insert the units at other locations to see whether the newly synthesized objects remain compatible with their surroundings.
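The dissection step can be illustrated with a minimal NumPy sketch. This is not the authors’ PyTorch implementation: the function name, the toy tensors, and the fixed threshold are all illustrative. The idea is to binarize a unit’s activation map and score its overlap with a concept’s segmentation mask via intersection-over-union (IoU); units with high IoU for a concept are the candidates for that concept’s “brush.”

```python
import numpy as np

def unit_concept_iou(activation, seg_mask, threshold):
    """Score how well one unit's activation map matches a concept's
    segmentation mask, using intersection-over-union (IoU)."""
    unit_mask = activation > threshold                  # binarize the unit's map
    inter = np.logical_and(unit_mask, seg_mask).sum()   # pixels both agree on
    union = np.logical_or(unit_mask, seg_mask).sum()    # pixels either covers
    return inter / union if union > 0 else 0.0

# Toy example: a 4x4 "tree" segmentation mask, and a unit that fires
# on roughly the same region (all values are made up).
seg = np.zeros((4, 4), dtype=bool)
seg[:2, :2] = True          # the "tree" occupies the top-left quadrant
act = np.zeros((4, 4))
act[:2, :2] = 0.9           # the unit fires strongly there too
act[3, 3] = 0.2             # plus a weak spurious response elsewhere

print(unit_concept_iou(act, seg, threshold=0.5))  # → 1.0
```

In the paper this scoring is done over many images and all units of a layer, and the threshold is chosen per unit rather than fixed; the sketch keeps only the core overlap computation.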



The paper’s authors present a few surprising findings:

- The same units that correlate with a specific object class can compose that object with totally different appearances.
- The GAN wants objects to be generated in the right place: for example, it will reject attempts to draw a door in the sky or on a tree.
- The specific sets of units that cause visual artifacts can be identified and removed, improving output quality.
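The interventions behind these findings can be sketched in a few lines of NumPy. Again, this is a toy illustration, not the authors’ code: the helper name, the unit indices, and the random feature tensor are assumptions. Forcing a concept’s units to zero inside a region ablates the object there, while forcing them to a high value attempts to insert it (which the generator may then accept or override depending on context).

```python
import numpy as np

def intervene(features, units, region, value):
    """Set the chosen units' activations inside a spatial region.
    value=0.0 ablates (removes the object); a high value tries to insert it."""
    out = features.copy()
    r0, r1, c0, c1 = region
    for u in units:
        out[u, r0:r1, c0:c1] = value
    return out

# Toy feature tensor: 8 units on a 4x4 feature map (values are arbitrary).
feats = np.random.rand(8, 4, 4)
door_units = [2, 5]   # hypothetical units matched to the "door" concept

# Ablation: zero the door units everywhere, so doors vanish downstream.
ablated = intervene(feats, door_units, (0, 4, 0, 4), 0.0)

# Insertion: force high activation in the lower-left, painting a door there.
painted = intervene(feats, door_units, (2, 4, 0, 2), 5.0)
```

In the real system the edited features are passed through the remaining generator layers to render the image; that downstream rendering is what lets the model veto implausible placements, such as a door in the sky.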

The paper belongs to a series of recent research efforts aimed at improving the interpretability of black-box deep learning models. Following visualization work on CNNs and RNNs, this new research marks a step forward in the important task of building a comprehensive understanding of GANs.



GAN Dissection: Visualizing and Understanding Generative Adversarial Networks is available on arXiv, and its code is open-sourced on GitHub.