In Robbie’s words:

I trained the network on the skulls. They are all the same shape, the same size, the same orientation, and they are all looking the same way. The results were good, but they were very similar to Ronan’s original skulls. We have the show chopped up into different epochs, and that is Epoch One, training directly on his skulls.

For Epoch Two I thought about how the coolest part of using GANs is that you're getting a weird machine viewpoint of artwork. But feeding in all the skulls with the same layout is sort of like telling the machine how to look at the paintings. You're giving it a very fixed, very normal perspective that we have already seen before.

So for Epoch Two, I basically played around with feeding the machine the skulls completely independently of any rotation or perspective, so the machine sees skulls that are all flipped around and stretched out. I'm using the same model, but the number of skulls in the training set jumped from 500 to 17,000. And the results are really, really good. It makes these really strange images that you would never expect. You can tell that they are skulls, but they really are not familiar. Ronan really loves those. He really likes to correct some of the skulls. He'll say something like, 'I like this one but it's not right,' or 'There is never an image I am completely satisfied with,' so he corrects it. He also does interpretations of them.
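Robbie doesn't spell out his augmentation pipeline, but the jump from 500 to 17,000 training images is consistent with generating roughly 34 randomized variants per source image. A minimal sketch of that idea, assuming numpy arrays as images and using random flips, quarter-turn rotations, and a nearest-neighbour stretch (the function names and the factor of 34 are illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, n_variants):
    """Produce n_variants of img via random flip, rotation, and stretch."""
    out = []
    for _ in range(n_variants):
        a = img
        if rng.random() < 0.5:          # random horizontal flip
            a = np.fliplr(a)
        a = np.rot90(a, k=int(rng.integers(0, 4)))  # random quarter-turn
        # random stretch: nearest-neighbour resample to a new size
        h, w = a.shape
        new_h = max(1, int(h * rng.uniform(0.7, 1.3)))
        new_w = max(1, int(w * rng.uniform(0.7, 1.3)))
        rows = np.arange(new_h) * h // new_h
        cols = np.arange(new_w) * w // new_w
        out.append(a[rows[:, None], cols])
    return out

# 500 source images, 34 variants each -> 17,000 training samples
dataset = [rng.random((64, 64)) for _ in range(500)]
augmented = [v for img in dataset for v in augment(img, 34)]
print(len(augmented))  # 17000
```

Because the network never sees a canonical orientation, it cannot simply memorize the fixed layout of the originals, which is presumably why the Epoch Two outputs feel less familiar.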

I also think that the Epoch Two skulls raise interesting questions about authorship: the network has learned exclusively from Ronan's work, yet its outputs don't strongly resemble it.