Google calls this technique 'Inceptionism' - and already uses the system in its image search

Researchers also asked the system what it thought dumbbells looked like - resulting in a human arm attached to one

Feedback loops and over-analysis caused unusual objects to be spotted - such as animals in clouds


Google has revealed what its most advanced artificial intelligence systems dream of - and it can be terrifying.

The firm has revealed a stunning set of images to help explain how its systems learn over time.

It shows how the system learns - and what happens when it gets things wrong.

The images were created by feeding a picture into the neural network, and asking it to emphasise a feature it recognised - in this case, animals.

HOW THEY DID IT The images were created by feeding a picture into the network, and then asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises - in this case, animals. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.
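The loop described in the box above can be sketched in a few lines of Python. This is a toy illustration only, not Google's actual code: a hypothetical linear 'detector' stands in for one neuron of a trained network, and nudging the image towards its pattern plays the role of the 'emphasise what you recognise' step.

```python
import numpy as np

# Toy 'feature detector': responds strongly to a diagonal pattern.
# (A hypothetical stand-in for one neuron of a trained network.)
feature = np.eye(4)

def feature_score(image):
    """How strongly the detector 'recognises' its feature in the image."""
    return float(np.sum(image * feature))

def emphasise(image, step=0.1):
    """Nudge the image to increase the detector's response.
    For this linear detector, the gradient of the score with
    respect to the image is just the feature pattern itself."""
    return image + step * feature

# Start from a noisy 'photo' and feed it back through the loop.
rng = np.random.default_rng(0)
image = rng.normal(size=(4, 4))
scores = [feature_score(image)]
for _ in range(20):
    image = emphasise(image)
    scores.append(feature_score(image))
```

Each pass through the loop strengthens the feature a little more, which is why the final images bear so little resemblance to the originals.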

'Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition,' wrote Alexander Mordvintsev, Christopher Olah and Mike Tyka of Google's AI team.

'But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don't.'

Google trains an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications the team want.

The team has even given the images a name - Inceptionism.

The network typically consists of 10-30 stacked layers of artificial neurons.

Each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached.

The network's 'answer' comes from this final output layer.
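The layer-by-layer flow described above can be sketched with a hypothetical tiny network (real networks of this kind use 10-30 layers, typically convolutional rather than the simple fully connected layers assumed here):

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical stack of three fully connected layers of 8 neurons each.
layer_weights = [rng.normal(size=(8, 8)) * 0.5 for _ in range(3)]

def forward(x):
    """Feed the input through each layer in turn; each layer
    'talks to' the next, and the final activation is the
    network's 'answer'."""
    for weights in layer_weights:
        x = np.maximum(0.0, weights @ x)  # ReLU-style artificial neurons
    return x

answer = forward(rng.normal(size=8))
```

The interesting part for Inceptionism is the middle layers: asking the network to amplify what they respond to is what produces the dream-like imagery.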

In doing this, the software builds up an idea of what it thinks an object looks like.

Researchers used this to ask the software to create a dumbbell, for instance.

What came back (below) was a strange image showing an arm attached to a dumbbell.

'There are dumbbells in there alright, but it seems no picture of a dumbbell is complete without a muscular weightlifter there to lift them.

The researchers also asked the system to analyse Edvard Munch's The Scream - which was turned into a portrait of a dog

In this case, the network failed to completely distill the essence of a dumbbell.

'Maybe it's never been shown a dumbbell without an arm holding it.

'Visualization can help us correct these kinds of training mishaps.'

'Why is this important?' the team wrote.

'Here's what one neural net we designed thought dumbbells looked like,' said the researchers

The system has tried to learn to recognise animals - and spotted strange animals in unexpected places as a result.

'Well, we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, color or orientation).

'But how do you check that the network has correctly learned the right features? It can help to visualize the network's representation of a fork.'

The animals were all spotted in a seemingly simple picture of a cloud when analysed by Google's AI several times, creating a kind of feedback loop and amplifying what the network knew best - in this case, animals.

The cloud image used to create the animals above

The team can even program the AI to try and spot whole objects - with hilarious results.

'If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge.

'Again, we just start with an existing image and give it to our neural net.

'We ask the network: 'Whatever you see there, I want more of it!'

The team found this creates a feedback loop.

If a cloud looks a little bit like a bird, the network will make it look more like a bird.

This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.

'The results are intriguing—even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes.

'This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals.

'But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features.'

Some of the other amazing images created by the system.

Another landscape painting fed into the system - with bizarre results

The team also tried putting different kinds of pictures into the system.

'Horizon lines tend to get filled with towers and pagodas. Rocks and trees turn into buildings. Birds and insects appear in images of leaves.'

The researchers say some of the work could even be art.

'This work also makes us wonder whether neural networks could become a tool for artists - a new way to remix visual concepts - or perhaps even shed a little light on the roots of the creative process in general.'