Over the past several years, Convolutional Neural Networks (CNNs) have established themselves as a state-of-the-art computer vision tool in both industry and academia. Used in applications ranging from facial recognition to self-driving cars, they have become enormously popular among deep learning developers. In my work at Galaxy.AI, I’ve implemented CNNs for some of the more “traditional” computer vision tasks, such as image classification and object localization.

In addition to these sorts of tasks, however, CNNs have been shown to be particularly good at recognizing artistic style. Specifically, in this paper from 2015, Gatys et al. discuss how deep convolutional neural networks can distinguish between “content” and “style” in images. By writing separate loss functions for each, the authors demonstrate how CNNs can combine the style of one image with the content of another to create new, visually appealing images. One impressive aspect of this technique is that no new network training is required: pre-trained weights, such as those learned on ImageNet, work quite well.
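To make that concrete, here is a minimal sketch of the two losses using TensorFlow’s Keras API. The paper works with a VGG network pre-trained on ImageNet; the VGG19 variant, layer choices, and normalization below are my illustrative assumptions rather than the paper’s exact settings, and the full walkthrough follows later in this post.

```python
import tensorflow as tf

# A VGG network pre-trained on ImageNet; no further training is needed.
# (VGG19 here is an assumption for illustration.)
vgg = tf.keras.applications.VGG19(weights='imagenet', include_top=False)
vgg.trainable = False

def content_loss(content_features, generated_features):
    # Content loss: squared error between the feature maps of one
    # chosen layer for the content image and the generated image.
    return 0.5 * tf.reduce_sum(tf.square(generated_features - content_features))

def gram_matrix(features):
    # Style is captured by the correlations between feature channels
    # within a layer, summarized by the Gram matrix.
    channels = int(features.shape[-1])
    flattened = tf.reshape(features, [-1, channels])  # (H*W, C)
    n = tf.cast(tf.shape(flattened)[0], tf.float32)
    return tf.matmul(flattened, flattened, transpose_a=True) / n

def style_loss(style_features, generated_features):
    # Style loss: squared error between the Gram matrices of the style
    # image and the generated image (normalization constants are folded
    # into the per-layer loss weights for simplicity).
    return tf.reduce_sum(tf.square(gram_matrix(generated_features)
                                   - gram_matrix(style_features)))
```

Because both losses are computed from the frozen network’s activations, optimization adjusts only the pixels of the generated image, which is why no retraining of the network itself is needed.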

Style transfer is a fun and interesting way to showcase the capabilities of neural networks. I wanted to take a stab at creating a bare-bones working example using the popular Python library Keras. In this post, I’ll walk you through my approach, mimicking as closely as possible the methods from the paper. The full code from this post can be found at https://github.com/walid0925/AI_Artistry.

Using only two base images at a time, we’ll be able to create AI artwork that looks something like this: