The Neural Turing Machine will combine the best of number-crunching with the human-like adaptability of neural networks – so it can invent its own programs

Human touch needed, for now (Image: Joe Raedel/Getty Images)

YOUR smartphone is amazing, but ask it to do something it doesn’t have an app for and it just sits there. Without programmers to write apps, computers are useless.

That could soon change. DeepMind Technologies, a London-based artificial-intelligence firm acquired by Google this year, has revealed that it is designing computers that combine the way ordinary computers work with the way the human brain works. They call this hybrid device a Neural Turing Machine. The hope is it won’t need programmers, and will instead program itself.

Neural networks, which make up half of DeepMind’s computer architecture, have been around for decades but are receiving renewed attention as more powerful computers take advantage of them. The idea is to split processing across a network of artificial “neurons”, simple units that each process an input and pass the result on. These networks are good at learning to recognise pieces of data and classify them into categories. Facebook recently trained a neural network to identify faces with near-human accuracy (read more about how computers are learning to see on page 24).
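The artificial “neuron” described above can be sketched in a few lines of Python. This is a minimal illustration, not DeepMind’s implementation: a single unit weights its inputs, sums them, and squashes the result through an activation function. The input values, weights and bias below are arbitrary examples.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': weight each input, sum, then squash.

    The sigmoid squashes the sum into (0, 1), which downstream neurons
    (or a classifier) can treat as a confidence score.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Arbitrary example: two inputs fed through one unit.
activation = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(activation, 3))  # prints 0.574
```

Networks of thousands of such units, with learned rather than hand-picked weights, are what recognise faces and other patterns.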


While that’s impressive, the flip side is that neural networks struggle with basic computational tasks such as copying and storing data. “These neural networks that are so good at recognising patterns – a traditional domain for humans – are not so good at doing the stuff your calculator has done for a long time,” says Jürgen Schmidhuber of the Dalle Molle Institute for Artificial Intelligence Research in Manno, Switzerland.

Bridging that gap could give you a computer that does both, and can therefore invent programs for situations it has not seen before. The ultimate goal is a machine with the number-crunching power of a conventional computer that can also learn and adapt like a human.

DeepMind’s solution is to add a large external memory that the network can read from and write to in many different ways. Mathematician Alan Turing realised that such a memory was a key part of ordinary computing architecture, hence the name Neural Turing Machine (NTM). This gives the neural network something like a human’s working memory – the ability to quickly store and manipulate a piece of data.
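One way to picture how a network can “access memory in many different ways” is as a blended read: instead of fetching a single slot, every row of memory contributes in proportion to an attention weight. The memory contents and weights below are made-up toy values; in a real NTM the network learns to produce the weights itself.

```python
def read(memory, weights):
    """Blended read over a memory matrix: each row contributes in
    proportion to its attention weight, so the operation is smooth
    and the network can learn where to look."""
    cols = len(memory[0])
    return [sum(w * row[c] for w, row in zip(weights, memory))
            for c in range(cols)]

# Toy 3-row, 2-column memory; attention focused almost entirely on row 1.
memory = [[1.0, 0.0],
          [0.0, 1.0],
          [0.5, 0.5]]
weights = [0.05, 0.9, 0.05]   # attention weights, summing to 1
print(read(memory, weights))  # close to row 1: [0.075, 0.925]
```

Because the read is a weighted blend rather than a hard lookup, the same learning methods that train ordinary neural networks can also train the memory access.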

To test the idea, the DeepMind team asked the NTM to learn how to copy blocks of binary data it received as input, and compared its performance with a more basic neural network. The NTM learned much faster, and could reproduce longer blocks with fewer errors. Results were similar for experiments on remembering and sorting lists of data. When the team studied what the NTM was doing, they found its methods closely matched the code that a human programmer would have written (arxiv.org/abs/1410.5401). These tasks are extremely basic, but essential if such machines are to create sophisticated software.
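The score behind “fewer errors” can be sketched as a simple bit count: compare a network’s reproduced block with the original, position by position. The blocks below are made-up examples, not data from the paper.

```python
def bit_errors(output, target):
    """Count positions where a reproduced block differs from the input
    block - the kind of score used to compare a copy-task learner
    against the ideal program."""
    return sum(o != t for o, t in zip(output, target))

target  = [1, 0, 1, 1, 0, 0, 1, 0]
perfect = list(target)               # what a learned copy program should emit
flawed  = [1, 0, 0, 1, 0, 0, 1, 1]  # two bits wrong
print(bit_errors(perfect, target), bit_errors(flawed, target))  # prints 0 2
```

A network that has genuinely learned a copy *program*, rather than memorised examples, should keep this count near zero even on blocks longer than any it saw in training.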

Other researchers at Google are also trying to teach computers to learn more complex processes. One team recently published details of a neural network that is capable of learning to read simple code and execute it without first being taught the necessary programming language, a bit like successfully adding two numbers without knowing what addition or numbers actually are (arxiv.org/abs/1410.4615).

The mixed architecture used by DeepMind seems sensible, says Chris Eliasmith at the University of Waterloo, Canada. “As humans we classify but we also manipulate the classification,” he says. “If you want to build a computer that is cognitive in the way that we are, it is going to require this kind of control.”

There are several reasons why this research area is so hot right now. “Digital computers are basically hitting a wall,” says Eliasmith. It seems like Moore’s law – the trend for microchips to double in capacity every two years – is ending, he says.

And then there are growing concerns over energy efficiency. Conventional computers that could match the human brain, if such a thing could be made, would need multiple full-scale power plants. Firms like IBM are already building neuron-inspired hardware with much lower power requirements, and that means software must also adapt. “If we want to use our most efficient hardware, we have to express our algorithms in a manner which fits,” Eliasmith says.

This article appeared in print under the headline “Ditch the programmers”