A figure in Ilyas et al. that struck me as particularly interesting was the following graph, which shows a correlation between adversarial transferability between architectures and their tendency to learn similar non-robust features.

Adversarial transferability vs test accuracy of different architectures trained on ResNet-50′s non-robust features.

One way to interpret this graph is that it shows how well a particular architecture is able to capture non-robust features in an image. Since the non-robust features here are defined as the non-robust features ResNet-50 captures, $NRF_{resnet}$, what this graph really shows is how well an architecture captures $NRF_{resnet}$.

Notice how far back VGG is compared to the other models.

In the unrelated field of neural style transfer, VGG-based neural networks are also quite special, since non-VGG architectures are known to not work very well without some sort of parameterization trick (this phenomenon is discussed at length in this Reddit thread). The above interpretation of the graph provides an alternative explanation for this phenomenon. Since VGG is unable to capture non-robust features as well as other architectures, the outputs for style transfer actually look more correct to humans! To follow this argument, note that the perceptual losses used in neural style transfer depend on matching features learned by a separately trained image classifier. If these learned features don’t make sense to humans (non-robust features), the outputs for neural style transfer won’t make sense either.

Before proceeding, let’s quickly discuss the results obtained by Mordvintsev et al. in Differentiable Image Parameterizations, where they show that non-VGG architectures can work for style transfer by using a simple technique previously established in feature visualization. In their experiment, instead of optimizing the output image in RGB space, they optimize it in Fourier space, and run the image through a series of transformations (e.g., jitter, rotation, scaling) before passing it through the neural network.
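The transformation half of this technique can be sketched in a few lines of PyTorch. This is a hypothetical, simplified version for illustration: only jitter and random scaling are shown, and the Fourier-space (decorrelated) parameterization is omitted entirely.

```python
import torch
import torch.nn.functional as F

def jitter(img, max_shift=8):
    """Randomly translate a (1, C, H, W) image by up to max_shift pixels."""
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,))
    return torch.roll(img, shifts=(int(dx), int(dy)), dims=(-2, -1))

def random_scale(img, scales=(0.95, 0.975, 1.0, 1.025, 1.05)):
    """Rescale the image by a randomly chosen factor."""
    s = scales[torch.randint(len(scales), (1,)).item()]
    size = [max(1, int(d * s)) for d in img.shape[-2:]]
    return F.interpolate(img, size=size, mode="bilinear", align_corners=False)

def transform_then_forward(model, img):
    """Apply stochastic transformations before every forward pass,
    so the optimization cannot rely on pixel-exact patterns."""
    return model(random_scale(jitter(img)))
```

Because a fresh transformation is sampled on every optimization step, any image feature the loss exploits has to survive translation and rescaling.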

Can we reconcile this result with our hypothesis linking neural style transfer and non-robust features?

One possible theory is that all of these image transformations weaken or even destroy non-robust features. Since the optimization can no longer reliably manipulate non-robust features to bring down the loss, it is forced to use robust features instead, which are presumably more resistant to the applied image transformations (a rotated and jittered flappy ear still looks like a flappy ear).

A quick experiment

Testing our hypothesis is fairly straightforward: Use an adversarially robust classifier for neural style transfer and see what happens.

I evaluated a regularly trained (non-robust) ResNet-50 against a robustly trained ResNet-50 from Engstrom et al. on neural style transfer. For comparison, I also performed the same algorithm with a regular VGG-19.

To ensure a fair comparison despite the different networks having different optimal hyperparameters, I performed a small grid search for each image and manually picked the best output per network. Further details can be read here or observed in the accompanying Colaboratory notebook: L-BFGS was used for optimization, as it showed faster convergence than Adam. For ResNet-50, the style layers used were the ReLU outputs after each of the four residual blocks, $[relu2\_x, relu3\_x, relu4\_x, relu5\_x]$, while the content layer used was $relu4\_x$. For VGG-19, the style layers $[relu1\_1, relu2\_1, relu3\_1, relu4\_1, relu5\_1]$ were used, with content layer $relu4\_2$. In VGG-19, max pooling layers were replaced with average pooling layers, as stated in Gatys et al.
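The style transfer objective itself is identical across networks; only the feature extractor changes. A minimal Gram-matrix sketch of that objective is below. The layer names mirror the ResNet-50 choice described above, but `style_weight` and the exact loss bookkeeping are illustrative assumptions, not the values from the grid search.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Normalized Gram matrix of a (1, C, H, W) feature map."""
    _, c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_content_loss(feats, style_targets, content_target,
                       content_layer="relu4_x", style_weight=1e6):
    """Weighted sum of Gram-matrix style losses plus an MSE content loss.

    `feats` maps layer names to activations of the image being optimized;
    `style_targets` maps the same names to precomputed Gram matrices of
    the style image; `content_target` is the content image's activation
    at `content_layer`.
    """
    style_loss = sum(
        F.mse_loss(gram_matrix(feats[name]), target_gram)
        for name, target_gram in style_targets.items()
    )
    content_loss = F.mse_loss(feats[content_layer], content_target)
    return style_weight * style_loss + content_loss
```

Swapping a non-robust ResNet for a robust one changes nothing in this code; it only changes which features the loss is matching.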

The results of this experiment can be explored in the diagram below.

[Interactive diagram: choose a content image and a style image, then compare the outputs of VGG, the regular ResNet, and the robust ResNet.]

Success! The robust ResNet shows drastic improvement over the regular ResNet. Remember, all we did was switch the ResNet’s weights; the rest of the code for performing style transfer is exactly the same!

A more interesting comparison can be made between VGG-19 and the robust ResNet. At first glance, the robust ResNet’s outputs seem on par with VGG-19’s. Looking closer, however, the ResNet’s outputs seem slightly noisier and exhibit some artifacts (this is more obvious when the output image is initialized not with the content image, but with Gaussian noise).

Texture synthesized with VGG (mild artifacts).

Texture synthesized with robust ResNet (severe artifacts).

A comparison of artifacts between textures synthesized by VGG and ResNet. This diagram was repurposed from Deconvolution and Checkerboard Artifacts by Odena et al.

It is currently unclear exactly what causes these artifacts. One theory is that they are checkerboard artifacts, caused by convolution layers whose kernel size is not divisible by their stride. They could also be caused by the max pooling layers present in ResNet. An interesting implication is that these artifacts, while problematic, seem orthogonal to the problem that adversarial robustness solves in neural style transfer.
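The uneven-overlap intuition behind checkerboard artifacts is easy to demonstrate with a small, self-contained sketch (a hypothetical helper, not code from the experiment): when a sliding window's size is not divisible by its stride, some positions are covered by more windows than their neighbors, producing an alternating pattern.

```python
import numpy as np

def overlap_counts(n, kernel, stride):
    """Count how many sliding windows of size `kernel`, moving in steps
    of `stride`, cover each of `n` positions along one axis."""
    counts = np.zeros(n, dtype=int)
    for start in range(0, n - kernel + 1, stride):
        counts[start:start + kernel] += 1
    return counts

# Kernel 3, stride 2: kernel size not divisible by stride,
# so interior positions alternate between 1 and 2 overlaps.
print(overlap_counts(9, kernel=3, stride=2))   # [1 1 2 1 2 1 2 1 1]

# Kernel 2, stride 2: divisible, so every position is covered once.
print(overlap_counts(10, kernel=2, stride=2))  # [1 1 1 1 1 1 1 1 1 1]
```

The alternating 1-2-1-2 pattern in the first case is the one-dimensional analogue of the checkerboard pattern; in two dimensions the effect compounds across both axes.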

VGG remains a mystery

Although this experiment started because of an observation about a special characteristic of VGG nets, it did not provide an explanation for this phenomenon. Indeed, if we are to accept the theory that adversarial robustness is the reason VGG works out of the box with neural style transfer, surely we’d find some indication in existing literature that VGG is naturally more robust than other architectures.

A few papers indeed show that VGG architectures are slightly more robust than ResNet. However, they also show that AlexNet, which is not known to work well for neural style transfer (as shown by Dávid Komorowicz in this blog post), is above VGG in terms of this “natural robustness”.

Perhaps adversarial robustness just happens to incidentally fix or cover up the true reason non-VGG architectures fail at style transfer (or other similar algorithms); i.e., adversarial robustness is a sufficient but unnecessary condition for good style transfer. In fact, neural style transfer is not the only pretrained-classifier-based iterative image optimization technique that magically works better with adversarial robustness: Engstrom et al. show that feature visualization via activation maximization works on robust classifiers without enforcing any of the priors or regularization (e.g., image transformations and decorrelated parameterization) used by previous work. In a recent chat with Chris Olah, he pointed out that these feature visualization techniques actually work well on VGG without such priors, just like style transfer! Whatever the reason, I believe that further examination of VGG is a very interesting direction for future work.