An artist's impression of a Differentiable Neural Computer. Image: DeepMind

Google's DeepMind has unveiled a new "computer" that combines data processing with self-learning code – and the firm tested it using the London Underground.

This new algorithm is able to retain information in its memory and apply what it has learned to related problems.


In particular, computer scientists at DeepMind, purchased by Google for £400 million, trained their new neural network to find the fastest routes around the Tube.

"You can present it with a London Underground map, it can store that map and it can use it from then on [in similar situations] if it needs to," Alex Graves, the lead author of the research, told WIRED. Navigating the Tube efficiently isn't new, but the AI's retention of its methods and reasoning behind the route it would take is.
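As the article notes, route-finding itself is a solved problem: it can be hand-coded as a graph search. Purely as a point of contrast with what the DNC learns from data, the sketch below shows a conventional breadth-first search over a toy fragment of the network (the station links here are illustrative, not the real map):

```python
from collections import deque

# A toy fragment of the Tube map as an adjacency list.
# Station links are illustrative, not the real network.
tube = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road"],
    "Victoria": ["Green Park"],
}

def fewest_stops(start, goal):
    """Breadth-first search: returns a route with the fewest stops."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in tube[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(fewest_stops("Victoria", "Holborn"))
```

The novelty of the DNC is not this search itself but that the system learns to store the map and answer such queries from examples, rather than having the rule written for it.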


Graves, who completed the work with 19 other DeepMind researchers, says the neural network is able to retain what it has learnt from the London Underground map and apply it to another, similar situation – the Paris Metro, for example.


"It would be able to operate on something completely new that it hadn't seen before. It's an important type of memory that has been lacking in neural networks," he explained.

Traditional neural networks are able to learn everything needed to find the fastest way through a transport system but they would have to be "fed" the data multiple times.

"You can't give normal neural networks a piece of information and let them keep it indefinitely in their internal state – at some point it will be overwritten and they will essentially forget it," Graves said. By contrast, information in the new system's external memory can be kept "indefinitely".
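The key mechanism behind this persistence is an external memory matrix that the network reads from by content rather than by overwriting its own state. The following is a minimal sketch of that idea – content-based addressing with a soft, similarity-weighted read – using arbitrary sizes and values for illustration, not DeepMind's actual implementation:

```python
import numpy as np

# External memory: N slots, each a vector of width W.
# This store persists separately from any network's internal state.
N, W = 4, 3
memory = np.zeros((N, W))

def write(slot, vector):
    """Store a vector in a memory slot."""
    memory[slot] = vector

def read(key):
    """Soft read: weight every slot by its cosine similarity to the key."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over slots
    return weights @ memory                      # similarity-weighted blend

write(0, np.array([1.0, 0.0, 0.0]))
write(1, np.array([0.0, 1.0, 0.0]))
result = read(np.array([1.0, 0.1, 0.0]))  # key closest to slot 0
```

Because reading and writing are smooth, weighted operations, the whole system stays differentiable – meaning the network can learn, by gradient descent, when and where to store and retrieve information.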




Dubbed a "differentiable neural computer," the development has been published in the journal Nature. In a separate example, the computer scientists input the details of a family tree and the AI answered whether one person was the 'aunt,' 'father,' or some other relation of another.
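The family-tree task, too, can be solved conventionally with a hand-written rule – which is exactly what the DNC learns to do without one. A toy sketch of such a rule (the names are invented, and gender is ignored, so 'aunt' here simply means a parent's sibling):

```python
# A toy family tree stored as child -> set-of-parents facts.
# All names are invented for illustration.
parents = {
    "alice": {"carol", "david"},
    "bob": {"carol", "david"},
    "carol": {"erin", "frank"},
    "gina": {"erin", "frank"},
}

def is_aunt(x, y):
    """Hand-coded rule: x is an 'aunt' of y if x is a sibling
    of one of y's parents (siblings share at least one parent)."""
    for p in parents.get(y, set()):
        if x != p and parents.get(x) and parents.get(x) & parents.get(p, set()):
            return True
    return False

print(is_aunt("gina", "alice"))  # gina is a sibling of alice's parent carol
```

The contrast the Nature paper draws is that the DNC infers such relations from examples stored in its read-write memory, with no rule like `is_aunt` ever written down.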

"Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory," the authors, who include DeepMind's CEO and co-founder Demis Hassabis, wrote in the paper.

"Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data".

Essentially, the neural network is trained on previous experience to solve a familiar problem. In an opinion article published alongside the research, Herbert Jaeger, a computer scientist at Jacobs University Bremen, said the AI's memory allows it to reason.



"Graves and colleagues demonstrate the capabilities of their system by putting it through several tasks that require rational reasoning, such as planning a multi-stage journey using public transport," Jaeger wrote. Until now, he said, computers have only been able to achieve this by having a specific program written for them.

The research has implications for big data if it can be scaled to handle the volume of large databases. "A flexible, extensible DNC-style working memory might allow deep learning to expand into big-data applications that have a rational reasoning component, such as generating video commentaries or semantic text analysis," Jaeger speculated.