— Billy Wright

HUMANS ALLURED by the dreams of an android. Or at least, by the visions of image-recognition software with an artificial neurology.

They’ve been shared rampantly across the internet, you may have noticed: photographs of once-people, now coloured with a psychedelic rash, their eyebrows shorn but with new eyes sprouting from the mouth, the cheeks and neck, ringlets of skin peeling from the face, hemmed into a liquefied background lost for shape but for the phantom Chihuahua heads that emerge like angels from the trees, the sun, the clouds.

Surreal, frightening and indeed quite phenomenal. The artist behind it all is Google’s ‘Deep Dream’. Cool enough to be given a name, Deep Dream is something of a by-product of Google’s research into image-recognition software; an endeavour spurred forward by recent advances in artificial neural networks – systems of code inspired by the neurology of a real brain.

Deep Dream’s ‘dreams’ are the result of asking image recognition software to search for and accentuate objects in a picture that are not actually there. Buildings in a mountain scape, for example, or puppies in the sky.

Google has referred to the effect as ‘inceptionism’.

It started with a blog post in June, in which a team of Google researchers discussed the significance of inceptionism and exhibited the first of these strange images. According to Google, the post stirred keen interest among readers, programmers and artists alike.

So, at the beginning of July, the code for the software was made public, giving birth to Deep Dream.

Since then all over the place we’ve seen the familiar dreamlike wash of this primal step in artificial intelligence. Famous icons and landmarks, pictures of you and your friends, all warped and refigured. The faces of celebrities peel away to expose diseased swamp-things living beneath. A plate of food looks like some growth stripped from the walls of hell. And I buckle under the pressure of the many eyes my skin was apparently pregnant with.

Moving image is also vulnerable: appropriately, someone gave Fear and Loathing in Las Vegas the Deep Dream treatment.

It’s entertaining as a novelty. And certainly there is artistic merit here too. The images are as striking as they are mysterious.

But for the scientists, the pictures from Deep Dream serve an important function. Its ‘dreams’ offer insight into the sort of details that image-recognition software focuses on, where and how it looks for things, and what its visual understanding of a certain object is.

Complex pieces of software like this are taught to visually recognise things by first being shown millions of training examples: pictures of bananas, wine glasses, snails, all in their natural context, as well as more abstract articles such as ‘running,’ ‘playing’ or ‘anger’.

Behind the scenes is an artificial neural network – intricate layers of programming inspired by the networks of a real brain and capable of rudimentary ‘thought’. (Recently, one of them learnt how to play and master video games.)

Inside these networks, layers of interconnected ‘neurons’ send messages to each other through ‘synapses’, just as the cells inside our own brain do.

An image recognition network, like the parent of Deep Dream, is typically made up of 10-30 of these layers. When a network is shown a picture, an initial ‘input’ layer processes it first. Then the image is passed up and through the layers until a final ‘output’ layer of neurons is reached. From here, you get the network’s ‘answer’.
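The pass-it-up-through-the-layers idea can be sketched in a few lines of Python. This is a toy illustration, not the architecture of any real recognition network: the weights here are random, whereas a real network learns its weights from millions of training images.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # a common artificial 'neuron' activation: fire only on positive input
    return np.maximum(0, x)

# Three weight matrices standing in for the connections ('synapses')
# between an input layer, two hidden layers and an output layer.
layers = [rng.standard_normal((64, 32)),   # input layer -> hidden layer 1
          rng.standard_normal((32, 16)),   # hidden layer 1 -> hidden layer 2
          rng.standard_normal((16, 10))]   # hidden layer 2 -> output layer

def forward(image):
    """Pass a flattened 'image' up through every layer in turn."""
    activation = image
    for w in layers:
        activation = relu(activation @ w)  # each layer's neurons fire in response
    return activation                      # the output layer: the network's 'answer'

image = rng.standard_normal(64)            # a stand-in for a flattened picture
answer = forward(image)
print(answer.shape)                        # 10 output neurons, e.g. one per category
```

Real image-recognition networks use convolutional layers and many more neurons, but the flow is the same: the picture goes in at the bottom and an answer comes out at the top.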

Humans, and many other see-ers, go about recognising images in a similar way.

Light bounces from an object and on to the photoreceptors in our eyes, which send electrical signals to the occipital lobe (the vision centre) at the back of the brain. From there, just as the machines do, initial layers of neurons interpret the raw and basic components of an image such as brightness and colour. These elements compose things like contrast and shading, allowing further layers of neurons to work out spatial features like depth, size and shape. Familiar patterns emerge, such as a flower or a window. The final layers assemble these components into complete and meaningful interpretations: a colourful garden, a large old house.

As things stand, humans are better at this than the machines. Though the software is catching up, it is still prone to errors that seem silly to us, and anything too complex exacerbates this.

This is where the dreams of Deep Dream come in useful. If the recognition process is run in reverse to generate images instead, researchers can get an idea of what’s going on inside the mind of one of their artificial networks.

Google blogged about an experiment in which they asked image-recognition networks to search for particular objects inside images of random static. For instance, a banana. The software homed in on anything that looked remotely like a feature of a banana; trace splashes of line or colour consistent with its mental understanding of what that object should look like. What it found, it was asked to enhance, gradually making minuscule banana similarities more and more banana-ry. The end result is a mental projection of all things banana, representing what the network understands this object to be.
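A minimal sketch of that idea, assuming a much simpler 'detector' than Google's: here the banana detector is just a dot-product with a template, so the nudge that makes the image score higher is the template itself. In Deep Dream proper, the detector is a deep network and the nudging is gradient ascent on its activations.

```python
import numpy as np

rng = np.random.default_rng(1)

banana_template = rng.standard_normal(100)   # stand-in for the network's idea of 'banana'
image = rng.standard_normal(100)             # random static to start from

def banana_score(x):
    # how strongly the toy detector fires: how 'banana-ry' the image looks
    return banana_template @ x

step = 0.1
before = banana_score(image)
for _ in range(50):
    # gradient of the score with respect to the image; for this linear
    # detector it is simply the template, so we nudge the static towards it
    image = image + step * banana_template
after = banana_score(image)

print(after > before)   # prints True: the static has drifted towards 'banana'
```

Fifty tiny nudges later, the noise scores far higher on the detector than it did at the start, which is the essence of the experiment: enhance whatever already looks faintly banana-like until the whole image does.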

Trialling different objects leads researchers to pick up on any misunderstandings a network may have. For example, one network was asked to visualise a dumbbell and it appeared that no image was complete without a muscular arm hanging on. During its ‘training’, the network must have seen too many pictures of dumbbells coupled with weightlifters, and so failed to properly distil the visual essence of a dumbbell.

In some of the trials, the researchers might not ask a network to look for anything specific. Rather, it can be left to make the decision for itself.

In pictures of people, animals, scenery, or anything, the network searches the details for trace features of anything it can think of. It does this just as you or I might lie on our back and make shapes out of the clouds.

What it finds, it enhances.

Because each neuron layer in a network deals with a different level of abstraction, images generated from this process will differ depending on which layer they’re taken from. Earlier layers accentuate lines and patterns of colour, besetting an image with ornate strokes and arches. Higher-order layers hunt for whole objects, pinching and ruffling negative space until familiar forms materialise out of nowhere.
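Choosing the layer amounts to choosing the objective the dream maximises. A schematic sketch, with illustrative names rather than Google's actual API: amplify an early layer's activations and you boost edge- and texture-like responses; amplify a deep layer's and you boost object-like ones.

```python
import numpy as np

rng = np.random.default_rng(2)
w1 = rng.standard_normal((100, 50))    # early layer: lines, colour patterns
w2 = rng.standard_normal((50, 20))     # deeper layer: whole-object responses

def activations(image):
    a1 = np.maximum(0, image @ w1)     # early-layer activations
    a2 = np.maximum(0, a1 @ w2)        # deep-layer activations
    return {"early": a1, "deep": a2}

def dream_objective(image, layer="deep"):
    # the quantity gradient ascent would push up: 'make this layer fire harder'
    a = activations(image)[layer]
    return float((a ** 2).sum())

image = rng.standard_normal(100)
print(dream_objective(image, "early"), dream_objective(image, "deep"))
```

Swapping `layer="early"` for `layer="deep"` changes what gets enhanced, which is why dreams taken from different depths look so different.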

Google’s Deep Dream was mostly trained with pictures of animals. So naturally, animals are what it sees.

It’s clever and impressive research. And a notable achievement in art: Deep Dream is an artificial creator, with the beginnings of an artificial imagination.

Below are some of Google’s images generated by Deep Dream out of a “feedback loop”—whatever you see, make more of it!
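That feedback loop can be sketched schematically: dream on an image, zoom in a little, and feed the result straight back in. Here `dream_step` is a placeholder for one round of Deep Dream enhancement, not Google's code.

```python
import numpy as np

def dream_step(image):
    # placeholder for one enhancement pass: whatever you see, make more of it
    return image * 1.01

def zoom(image, crop=1):
    # crop the border and stretch back to the original size (nearest-neighbour)
    inner = image[crop:-crop, crop:-crop]
    rows = np.linspace(0, inner.shape[0] - 1, image.shape[0]).astype(int)
    cols = np.linspace(0, inner.shape[1] - 1, image.shape[1]).astype(int)
    return inner[np.ix_(rows, cols)]

frame = np.random.default_rng(3).random((32, 32))
frames = [frame]
for _ in range(5):
    frame = zoom(dream_step(frame))    # enhance, zoom, feed back in
    frames.append(frame)               # each frame dives deeper into the dream

print(len(frames), frames[-1].shape)
```

Stitch the frames together and you get the endlessly tunnelling videos the loop is known for.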

See more at Google’s Inceptionism gallery…

This website asks you to make an account first, but seems to be one of the better hosts out there if you feel like having a go yourself.