Google's DeepMind has beefed up its machine learning capabilities by coupling a neural network with external memory, using it to find the shortest path between stations on the London Underground.

Neural networks - systems modelled on how neurons in the brain are connected and work - are good at processing data, but bad at taking on the kind of algorithmic tasks that require storing intermediate results, because they lack memory.

The researchers from DeepMind, however, have taken steps to solve this problem by creating a differentiable neural computer (DNC).

Results published in a paper in Nature show that a DNC can read from and write to an external memory, and that it outperforms DeepMind’s neural Turing machine, an earlier system with only short-term memory.

The information is stored in a memory matrix, and is operated on by read and write functions.

The interaction of the read and write functions with the memory gives the DNC an associative memory. Data entering the neural network can be stored by writing it to a location in the memory matrix, and retrieved later by reading from it.
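As a rough sketch of how content-based reading from a memory matrix can work - this is illustrative only, not DeepMind's implementation, and the vectors and similarity-weighted read are simplified assumptions - consider:

```python
import math

# Toy memory matrix: a list of locations, each holding a small vector.
memory = [
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
]

def write(memory, location, vector):
    """Store a vector at a chosen location (the real DNC blends writes softly)."""
    memory[location] = list(vector)

def cosine(a, b):
    """Similarity between two vectors, used to weight each memory location."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def read(memory, key):
    """Content-based read: weight every location by its similarity to the
    key, then return the weighted average of the stored vectors."""
    sims = [math.exp(cosine(row, key)) for row in memory]
    total = sum(sims)
    weights = [s / total for s in sims]
    width = len(memory[0])
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(width)]

write(memory, 0, [1.0, 0.0, 0.0])
write(memory, 1, [0.0, 1.0, 0.0])
result = read(memory, [0.9, 0.1, 0.0])  # key most similar to location 0
```

Because the read is a weighted average rather than an exact lookup, a partial or noisy key still pulls out the closest stored item - the associative behaviour the researchers describe.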

How the DNC reads and writes memory to get right output. Photo credit: Nature and Graves et al

Everything written to the memory matrix is assigned a weighting indicating how related it is to information stored at other locations, which gives the machine a handy way to remember the order in which the information was stored.
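The order-keeping idea can be sketched with explicit bookkeeping - the DNC learns a soft "temporal link" between locations rather than keeping a literal list, so this is a loose analogy, not the paper's mechanism:

```python
# Toy temporal linkage: alongside the stored items, record the order in
# which locations were written so items can later be replayed in sequence.
memory = {}          # location -> stored item
write_order = []     # locations in the order they were written

def write(location, item):
    memory[location] = item
    if location in write_order:
        write_order.remove(location)   # rewriting moves it to the end
    write_order.append(location)

def read_in_order():
    """Replay stored items in the order they were written."""
    return [memory[loc] for loc in write_order]

write(0, "Euston")
write(1, "Warren Street")
write(2, "Oxford Circus")
sequence = read_in_order()   # items come back in storage order
```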

This associative nature allows the neural network to find the right information to read so it can produce the answer or output information.

Since the DNC also remembers where it has stored information, it can write new information to locations that are free, or overwrite memory that is no longer required. The system retains the information it has learned, and doesn’t need to be retrained if developers want to enlarge the memory matrix.

London Underground challenge

DeepMind’s DNC is especially adapted to processing information that can be presented as graph data, such as “parse trees, social networks, knowledge graphs and molecular structures”.

A team of 20 researchers led by Alex Graves decided to apply their DNC to a small region of the London Underground map.

To turn the map into data, each station was represented as a node, with the connecting lines as edges. The DNC was asked to find the best route - the one with the fewest stops rather than the shortest journey time - between two stations.
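For comparison, the classical way to find a fewest-stops route on such a graph is breadth-first search - shown below on a hypothetical fragment of the map (the station list is illustrative). The point of the experiment was that the DNC had to infer routes from the graph held in its memory, without being handed an algorithm like this:

```python
from collections import deque

# Hypothetical fragment of the Tube map: stations as nodes,
# direct connections as edges.
edges = {
    "Bond Street": ["Oxford Circus", "Green Park"],
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Green Park": ["Bond Street", "Oxford Circus", "Victoria"],
    "Victoria": ["Green Park"],
    "Tottenham Court Road": ["Oxford Circus"],
}

def fewest_stops(start, goal):
    """Breadth-first search: explores routes in order of length, so the
    first route to reach the goal has the fewest stops."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = fewest_stops("Victoria", "Tottenham Court Road")
```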

The relations between stations were not taught; they had to be inferred from the map held in memory. Journeys requiring seven-step traversals were answered correctly 99.8 per cent of the time.

A similar challenge was set up by the researchers using a fictional family tree. The DNC was then asked questions such as "who is Freya’s maternal uncle?", testing the machine’s ability to find relations between pieces of information stored in its memory. This was a slightly trickier task, and the DNC gave the right answer to 81.8 per cent of the queries.
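Answering such a query means composing two stored relations - a maternal uncle is the mother's brother. A minimal sketch, with an invented family stored as (person, relation, relative) triples rather than in a neural memory:

```python
# Hypothetical family tree stored as (person, relation, relative) triples -
# the kind of graph the DNC held in its memory (names invented here).
facts = [
    ("Freya", "mother", "Ada"),
    ("Ada", "brother", "Tom"),
    ("Ada", "sister", "Ivy"),
]

def lookup(person, relation):
    """Return everyone standing in the given relation to the person."""
    return [rel for (p, r, rel) in facts if p == person and r == relation]

def maternal_uncle(person):
    """Compose two relations: a maternal uncle is the mother's brother."""
    uncles = []
    for mother in lookup(person, "mother"):
        uncles.extend(lookup(mother, "brother"))
    return uncles

answer = maternal_uncle("Freya")
```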

Inferring from graph data. Photo credit: Nature and Graves et al

Although the DNC was designed with “computational considerations” in mind, DeepMind noted the striking similarities it has with the human brain’s hippocampus.

Examining how to replicate in machines the behaviour found in biological brains is a particular interest of DeepMind’s co-founder Demis Hassabis.

“Human ‘free recall’ experiments demonstrate the increased probability of item recall in the same order as the first presented - a hippocampus-dependent phenomenon”, the paper noted. This resembles the DNC recalling memories by finding associations between the information it has stored.

DNCs won't help commuters yet, as the size of the memory matrix has to be scaled up massively for it to calculate the best routes for longer journeys. But it is a step towards allowing machines to make decisions through reasoning.

“To tackle real-world data we will need to scale up to thousands or millions of locations,” the paper said. Google DeepMind [is] looking to use DNCs as “representational machines for one-shot learning”, which will help machines with “scene understanding, language processing, and cognitive mapping”. ®