Analysis Google-stablemate DeepMind thinks it is one step closer to cracking artificial general intelligence with an algorithm that helps machines overcome memory loss.

AI is the hottest trend in technology right now – it’s on its way to reaching the peak of the hype cycle. There's a lot of ballyhoo behind all those headlines you've seen predicting a world dominated by intelligent systems. Yet today's systems can only perform well on specific tasks they’re trained on, such as playing Go, identifying images, or recognizing speech. No machine is intelligent enough to do all those things at once, because as soon as they learn a new task, previous skills are quickly forgotten.

Researchers call this “catastrophic forgetting,” and it’s a key hurdle to overcome if machines are to develop artificial general intelligence.

A team of researchers from DeepMind has partnered with a neuroscientist from Imperial College London to create an algorithm inspired by how the mammalian brain retains memories.

Results published in a paper on arXiv show the algorithm can be used to train a system to play several Atari 2600 games and recognize a dataset of handwritten numbers.

In the brain, vital knowledge is retained by reducing the plasticity of synapses, making it harder for stored information to be overwritten or forgotten. The algorithmic analogue is to keep important parameters from changing.

These parameters are variables that allow an algorithm to closely model a particular dataset or perform a specific function. For example, an algorithm could predict housing prices if the size, location or other properties like the number of rooms are correctly modeled as parameters.

The importance of each parameter can be defined by assigning weights to each variable. In deep learning, every time a system learns to perform a new task, the weights of the different parameters have to be adjusted.
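The housing example above can be sketched as a tiny model. This is purely illustrative and not from the paper: the weights are made-up numbers standing in for learned parameters.

```python
# Hypothetical linear model for the housing-price example:
# the price is a weighted sum of input features, and the
# weights are the parameters a learning algorithm would adjust.
def predict_price(size_sqm, rooms, location_score, weights):
    """Return a price estimate from feature values and learned weights."""
    w_size, w_rooms, w_loc, bias = weights
    return w_size * size_sqm + w_rooms * rooms + w_loc * location_score + bias

# Illustrative weights, not learned from real data
weights = (2000.0, 15000.0, 30000.0, 50000.0)
print(predict_price(80, 3, 0.9, weights))  # prints 282000.0
```

Training a network on a new task means nudging weights like these; catastrophic forgetting happens when the nudges destroy the values that made the old task work.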

The algorithm, fancily named “elastic weight consolidation” (EWC), constrains the weights that are vital for solving previously learned tasks as new learning proceeds. It can “be imagined as a spring anchoring the parameters to the previous solution, hence the name elastic,” the paper said.

A neural network is trained to scope out the important parameters that are common for different tasks, and the EWC algorithm is applied so it doesn’t forget those parameters as it learns to adjust to new tasks.
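The “spring” can be sketched as a quadratic penalty added to the new task's loss. The following is a minimal sketch, not the paper's implementation: it assumes a per-parameter importance array `fisher` (in the paper, derived from the Fisher information) and the old task's solution `theta_old`.

```python
import numpy as np

def ewc_loss(task_loss, theta, theta_old, fisher, lam=1.0):
    """EWC-style total loss: the new task's loss plus a quadratic
    'spring' pulling each parameter back toward its old-task value,
    scaled by how important that parameter was to the old task."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)
    return task_loss + penalty

# Unimportant parameters (importance ~ 0) are free to move;
# important ones are anchored to their old values.
theta_old = np.array([1.0, -2.0])
theta = np.array([1.5, -1.0])
fisher = np.array([0.0, 10.0])  # only the second parameter mattered before
print(ewc_loss(0.3, theta, theta_old, fisher))  # prints 5.3
```

Minimizing this combined loss lets the network learn the new task while paying a price for disturbing whatever the old task depended on.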

A modest improvement

The researchers tested their algorithm on the MNIST database, which contains handwritten digits commonly used to train machines to process images and recognize patterns.

The team shuffled the pixels of each image, so that each task required the machine to classify the same digits from a differently scrambled image. Conventional training methods would lead to catastrophic forgetting, but with the EWC algorithm the system learned to identify the numbers in each new permutation without forgetting the old ones.
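The pixel-shuffling setup (often called “permuted MNIST”) can be sketched like this. The toy four-pixel “images” below are stand-ins for real 28×28 MNIST digits; the key point is that every image in a task gets the same fixed permutation, while labels stay unchanged.

```python
import random

def make_permuted_task(images, seed):
    """Create one task by applying a fixed random pixel permutation
    to every image. Labels are unchanged, but the input layout differs
    per task, forcing the network to relearn the mapping each time."""
    rng = random.Random(seed)
    n_pixels = len(images[0])
    perm = list(range(n_pixels))
    rng.shuffle(perm)
    return [[img[p] for p in perm] for img in images]

# Two toy 4-pixel "images"; tasks A and B see different shufflings
images = [[0, 1, 2, 3], [4, 5, 6, 7]]
task_a = make_permuted_task(images, seed=1)
task_b = make_permuted_task(images, seed=2)
```

A network trained sequentially on such tasks with plain gradient descent overwrites itself; EWC's penalty is what preserves the earlier permutations' solutions.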

DeepMind is known for its work in deep reinforcement learning – where agents are trained to play games at human level or better – so it was no surprise that researchers decided to try their algorithm out on the classic Atari 2600 games.

Previous work – such as DeepMind’s transfer learning – meant separate networks were initially trained on individual tasks before the knowledge would be pooled into one network.

Transfer learning gets increasingly difficult as more networks are added. With the EWC algorithm, however, only a single neural network was needed to train a system to play the Atari 2600 games.

Playing the Atari games is a harder job than identifying handwritten numbers, so a more complex neural network was needed. The researchers also included biases specific to each game, so the network could play them more effectively.

A short-term memory was used as a “replay mechanism” so the network could learn from past experiences, and a long-term memory was based on the EWC algorithm.
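A replay mechanism of the kind described above can be sketched as a simple buffer. This is a generic illustration, not DeepMind's code: recent transitions are stored and random minibatches are drawn from them for training.

```python
import random
from collections import deque

class ReplayBuffer:
    """Short-term memory sketch: keep the most recent transitions
    (state, action, reward, next_state) and sample random minibatches,
    so the network can relearn from recent experience."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=1000)
for step in range(5):
    buf.add((step, "noop", 0.0, step + 1))  # toy transitions
batch = buf.sample(3)
```

In the paper's setup this short-term replay handles within-game learning, while the EWC penalty acts as the long-term memory that protects skills across games.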

It resulted in a “modest improvement” over a similar “forget-me-not” algorithm, though it still isn’t as effective as training ten separate neural networks, one per game. The algorithm remains primitive, and it “likely” underestimated the importance of some parameters needed to complete the different tasks.

Although the researchers are far from reaching artificial general intelligence, it is interesting to see DeepMind chasing that goal with the help of neuroscience. “...current neurobiological theories concerning synaptic consolidation do indeed scale to large-scale learning systems. This provides prima facie evidence that these principles may be fundamental aspects of learning and memory in the brain,” the paper concluded.

The ability to learn new things without forgetting is vital if machines are to be able to apply their knowledge to new environments. DeepMind hasn't only been playing with memory algorithms, but has also flirted with the idea of using a differentiable neural computer that has an external memory to solve problems. ®