To use the Neural Photo Editor, you select the "contextual paintbrush," choose a color and paint over the part of the image you want to change. The system can recognize that you're painting over the subject's hair, for instance, and intelligently fill in the whole region, shifting its color to match your brush. In another example, the user paints over the subject's mouth with a white brush to widen their smile.
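To get a feel for what a brush edit like this amounts to under the hood, here is a minimal sketch (not the Neural Photo Editor's actual code, and the function name is hypothetical): one simple way to apply a region-aware brush is a soft-masked blend, nudging pixels inside the selected region toward the brush color while leaving everything else untouched.

```python
import numpy as np

def apply_brush(image, mask, brush_color, strength=0.5):
    """Blend brush_color into `image` wherever `mask` is high.

    image:       H x W x 3 float array in [0, 1]
    mask:        H x W float array in [0, 1] (1 = fully inside the region)
    brush_color: length-3 RGB, values in [0, 1]
    strength:    how far to push pixels toward the brush color
    """
    weight = mask[..., None] * strength        # soft, per-pixel blend weight
    return (1 - weight) * image + weight * np.asarray(brush_color)

# Toy 2x2 "photo": top row is the hair region, bottom row is background.
img = np.full((2, 2, 3), 0.4)
hair_mask = np.array([[1.0, 1.0], [0.0, 0.0]])

edited = apply_brush(img, hair_mask, brush_color=[0.8, 0.2, 0.2])
print(edited[0, 0])   # hair pixel moves toward red: [0.6 0.3 0.3]
print(edited[1, 1])   # background pixel is unchanged: [0.4 0.4 0.4]
```

The neural part of the editor is what produces a sensible mask and a plausible recoloring from a rough stroke; the blend itself is the easy step.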

Brock says the basic system works well on system-generated images but falls apart on existing photos. However, by adding a type of deep learning model called an "adversarial network," the algorithm can compare the original and modified photos and apply the changes naturally. That means an artist can dramatically alter someone's appearance in an arbitrary photo with just a few brush strokes.
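The adversarial idea can be sketched with a generic GAN-style objective (this is an illustration, not Brock's exact Introspective Adversarial Network, and the scores below are made-up numbers): a discriminator learns to tell real photos from edited ones, and the editor is trained to produce edits the discriminator scores as real, which is what makes the changes blend in naturally.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy between a predicted probability and a 0/1 target."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# Hypothetical discriminator scores: probability the image is a real photo.
d_real = 0.9     # score for an untouched photo
d_edited = 0.2   # score for a clumsily edited photo

# Discriminator loss: label real photos 1, edited photos 0.
d_loss = bce(d_real, 1.0) + bce(d_edited, 0.0)

# Editor (generator) loss: it wants its edited photo scored as real.
g_loss = bce(d_edited, 1.0)

print(round(float(d_loss), 3))   # low: the discriminator is doing well
print(round(float(g_loss), 3))   # high: the editor is penalized until edits look real
```

Training alternates between the two losses, so the editor keeps improving until its brush-stroke edits are hard to distinguish from an unmodified photo.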

If you're code savvy, you can download the Neural Photo Editor from GitHub and try it for yourself. Just be aware that it's still in the early stages and only works on very low-resolution images. As shown in the video above, it sometimes doesn't work at all and generates "bizarro" results, as Brock puts it. Still, it's a pretty good preview of future image-editing software and shows why even Photoshop artists should fear the robot revolution.