One video generated using this technique.

Abstract Art with ML

10 June 2018

Randomly initialised neural networks are able to produce visually appealing images. In this post, we explore what compositional pattern producing networks are, how they can be parameterised, and what kinds of images can be obtained using this technique.

Tags: cppn, art, generation

Did you know that randomly initialised neural networks actually produce pretty cool pictures?

Well, I didn't, until I recently discovered pattern producing networks. This post is all about them, and includes a pattern generator that runs in your browser, so you can experience these networks for yourself. But first, let me explain.

Compositional Pattern Producing Networks

A compositional pattern producing network, or CPPN for short, is a network (here we will focus mainly on neural nets) that, given some parameters, produces a visual pattern. The main idea is to slide the network over all the x-y coordinates of an image and produce a 3-channel (RGB) color output for each pixel in the output image. We can look at this network $f$ in the following way:

$$\begin{pmatrix} r \\ g \\ b \end{pmatrix} = f\begin{pmatrix} x \\ y \end{pmatrix}$$

Since this network is continuous and differentiable, it outputs locally correlated values: sampling the network at two very close points leads to very similar output values. As a result, the images it generates could be called smooth.

Another cool property is "infinite" resolution: because the inputs are continuous coordinates, you can render at any size simply by scaling the coordinates the network receives as inputs.

One example of a compositional pattern producing network, using a simple 3-layer network with tanh as the activation function.
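As a concrete illustration, here is a minimal sketch of such a 3-layer tanh CPPN (assuming NumPy; the hidden-layer size and the [-1, 1] coordinate range are illustrative choices, not necessarily the ones used for the figures in this post):

```python
import numpy as np

def render_cppn(size=128, hidden=16, seed=0):
    """Render an RGB image by sliding a random 3-layer tanh MLP
    over every (x, y) coordinate of the image (a minimal CPPN)."""
    rng = np.random.default_rng(seed)

    # Coordinate grid over [-1, 1] x [-1, 1], one row per pixel.
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    inputs = np.stack([xs.ravel(), ys.ravel()], axis=1)  # shape (size*size, 2)

    # Randomly initialised weights, never trained.
    w1 = rng.normal(0, 1, (2, hidden))
    w2 = rng.normal(0, 1, (hidden, hidden))
    w3 = rng.normal(0, 1, (hidden, 3))

    h = np.tanh(inputs @ w1)
    h = np.tanh(h @ w2)
    rgb = np.tanh(h @ w3) * 0.5 + 0.5  # squash outputs into [0, 1]
    return rgb.reshape(size, size, 3)

image = render_cppn(size=64)
print(image.shape)  # (64, 64, 3)
```

Because the network is defined over continuous coordinates, increasing `size` simply samples the same underlying pattern more densely, which is exactly the "infinite" resolution property mentioned above.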

Parameters

Now we could simply run the network as it is, and this in fact works. But we can take it a little step further by adding certain other inputs to the network, with the aim of having it generate more complex images.

For example, we can add the radius $r$ and an adjustable parameter $\alpha$. With these modifications, our network $f$ looks like this:

$$\begin{pmatrix} r \\ g \\ b \end{pmatrix} = f\begin{pmatrix} x \\ y \\ r \\ \alpha \end{pmatrix}, \qquad r = \sqrt{x^2 + y^2}$$

The radius not only provides a nice non-linearity, it also enables the network to correlate the output color with the distance to the origin, because points on the same circumference receive the same value for $r$.

While the radius $r$ changes with $x$ and $y$, the $\alpha$ parameter is static over the course of the image. In essence, you can think of it as a z-parameter: when sampling from the 3-dimensional (x, y, z) cube, we look at the slice at position $z = \alpha$.
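A sketch of how the augmented input vector (x, y, r, α) can be assembled for every pixel (again assuming NumPy; the function name is hypothetical):

```python
import numpy as np

def cppn_inputs(size=128, alpha=0.5):
    """Build the augmented input (x, y, r, alpha) for every pixel.
    alpha is constant across the image: it selects the slice
    z = alpha of the (x, y, z) cube."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    x, y = xs.ravel(), ys.ravel()
    r = np.sqrt(x**2 + y**2)      # identical on any circle around the origin
    a = np.full_like(x, alpha)    # static over the whole image
    return np.stack([x, y, r, a], axis=1)  # shape (size*size, 4)

inputs = cppn_inputs(size=64, alpha=0.3)
print(inputs.shape)  # (4096, 4)
```

Feeding these rows into a random network like the one above, and sweeping α from frame to frame, produces the slice-by-slice animation through the (x, y, z) cube.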

You can get very creative with these parameters, and we'll explore more exotic configurations later on.

What about a 9-layer DenseNet? Well, see for yourself.

Animating along $\alpha$.

Initialisation

The output of a neural network is defined (a) by its inputs, which we talked about in the last section, and (b) by its weights. The weights therefore play a crucial role in how the network behaves and thus in what the output image will look like.

In the example images throughout this post, I mainly sampled the weights $W$ from a Gaussian distribution $\mathcal{N}$, with a mean of zero and a standard deviation dependent on the number of input neurons $N_{in}$ and a parameter $\beta$ which I could adjust to my taste:

$$W(N_{in}, \beta) \sim \mathcal{N}\!\left(\mu = 0,\ \sigma = \beta \, \frac{1}{N_{in}}\right)$$
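A sketch of this initialisation scheme, following the formula above (assuming NumPy; the function name is hypothetical):

```python
import numpy as np

def init_weights(n_in, n_out, beta=1.0, seed=42):
    """Sample a weight matrix W ~ N(mu=0, sigma=beta / n_in).
    The same seed always yields the same weights, so a seed
    fully determines the generated image."""
    rng = np.random.default_rng(seed)
    sigma = beta / n_in
    return rng.normal(loc=0.0, scale=sigma, size=(n_in, n_out))

W = init_weights(n_in=4, n_out=16, beta=2.0, seed=123)
```

Since the same seed always reproduces the same weights, sharing a seed is enough to reproduce an image; a larger β spreads the weights out more, which typically yields busier, higher-frequency patterns.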

We can also ask the network to output just a single value, and interpret that as a black-and-white image.

Now to the fun part. Here is a progressive image generator based on the principle of CPPNs. You can adjust the z-value, the variance (which corresponds to $\beta$ in the description above), choose whether you want a black-and-white image (B/W), and explore the seed space. Note that the random number generator used for the Gaussian distribution will always produce the same values for the same seed, which is cool, because you can share the seed and retrieve the same result.

I compiled a list of example seeds that I found quite compelling while playing with the tool. Try these seeds to start with and then explore the space yourself! :)

"Complexity" 6130076054
"Streams" 2321123
"Lines" 9183923745
"Alice's purple rabbit" 6828398570
"The Storm" 3742851
"Prism" 4397
"Gradient" 3742849
"Excess" 2321175
"Stained" 6828398584

Figure 2: example seeds. Click on an image to process it in the generator.

And these are just some examples. Try changing the seed, the time, or the variance yourself and maybe, just maybe, you'll come across a masterpiece! 😛

Exploring other architectures

In this section, I want to show you some results I've been getting with some more exotic architectures.

"The Swirl" and "Nebula": parameterising images with $\beta = \cos(10r)$.

As you can see, these images behave surprisingly differently. A single additional parameter makes a huge difference in the activations of the network.

"Reactor" and "Warp": symmetry using $f(x^2, y^2, \alpha)$.

Wrapping up

Now that we have gone through several architectures, explored multiple configurations, and looked at a bunch of images, it's time to wrap up. But I want to wrap up by pointing to some possible improvements.

Train these networks using backpropagation. That's the obvious one. What's not obvious is what to train them on.

As a follow-up to training: one way to supercharge this method of creating art would be to incorporate human feedback, e.g. using adversarial networks. For instance, humans could be shown two images and asked to choose the one they prefer. An adversarial network could then learn the probability that a given image is chosen by a human, and the gradient of this network could be used for backpropagation on the generator network.

Also, there are a lot of things I didn't try. One could explore even more of the architecture and parameter space: changing the bias initialisation, the kernel initialisation, using different network topologies, or even trying different color spaces.

In terms of use cases: these images make great color and gradient inspiration! Also, I just recently replaced my Spotify playlist covers with these. They make pretty great album artworks (don't they?). (Flume, hit me up ;-) )

Anyway, that's it for now. I hope you enjoyed my first blog post. If you did, I'd appreciate it if you consider subscribing to my blog (via RSS or JSON Feed). I'll try to publish blog posts regularly here. See you then!