The prospect that artificial intelligence (AI) might one day surpass human intelligence terrifies many people, including a number of notable public figures. And it’s not hard to see where that fear comes from.

As it is, deep learning machines already outperform humans in a number of areas. They can play video games, recognize faces, and even trade stocks. There’s one area, though, where humans remain superior, and that’s the speed at which we learn.

Right now, humans learn roughly ten times faster than a deep learning machine. And it is this ‘superiority’ that has kept the apocalyptic ‘AI takes over humans’ scenario in the background. Thanks (or no thanks?) to Google, however, that status quo may be about to change.

According to Alexander Pritzel of Google’s DeepMind subsidiary in London, the team has built a deep-learning machine that can learn nearly as quickly as a human. Beyond that, this AI can reportedly understand and act on new experiences far faster than previous systems, which could bring it to human-level learning speeds sooner rather than later.

Deep learning works by using layers of neural networks to identify patterns and trends in data. When one layer detects a pattern, it passes this information to the next layer, which in turn passes its findings to the layer after that. This handing-on of data continues until the information has flowed through every layer.
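That layer-by-layer flow can be sketched as a minimal feedforward pass. This is purely illustrative: the layer sizes, random weights, and tanh activation are all arbitrary assumptions, not details of DeepMind’s system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three layers; each layer's output ("detected patterns") feeds the next.
layer_weights = [rng.standard_normal((4, 8)),
                 rng.standard_normal((8, 8)),
                 rng.standard_normal((8, 2))]

def forward(x):
    """Pass data through each layer in turn; each output feeds the next."""
    for w in layer_weights:
        x = np.tanh(x @ w)  # transform the input and hand it onward
    return x

output = forward(np.ones(4))
print(output.shape)  # the last layer's 2-dimensional summary of the input
```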

How the system learns can be changed by adjusting internal factors such as the strength of the connections between layers. Changes have to be introduced slowly, however, because a drastic change in one layer can have an equally drastic effect on all succeeding layers. This is essentially why deep neural networks take so long to learn and train.
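This caution about drastic changes is why training typically proceeds by many small gradient steps. Here is a toy sketch of the idea using a single ‘connection strength’ and a deliberately small learning rate; the data and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100)   # some inputs
y = 3.0 * x                    # the relationship to learn: weight of 3
w = 0.0                        # a single "connection strength"
lr = 0.05                      # small learning rate: change slowly

for _ in range(200):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # gradient of the mean squared error
    w -= lr * grad                      # nudge the weight a little each step

print(round(w, 2))  # converges toward 3.0 over many small steps
```

A larger learning rate would change the weight faster, but past a point the updates overshoot and training becomes unstable, which is the trade-off the paragraph above describes.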

Pritzel claims they’ve found a workaround for this issue through what they call ‘neural episodic control’. As the team told MIT Technology Review, the technique has produced ‘dramatic improvements on the speed of learning for a wide range of environments’ because their agent is able to ‘rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimisation.’

DeepMind’s approach tries to replicate how learning happens in humans and animals. First, it copies what happens in the prefrontal cortex of the brain: identifying a familiar situation and behaving based on what it already knows about it. When a situation is unfamiliar, it instead copies what happens in the hippocampus, a kind of ‘trial and error’ approach where behavior that leads to a successful outcome is repeated and behavior that doesn’t is avoided in the future.
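That ‘trial and error’ loop can be sketched with a simple bandit-style learner that mostly repeats whatever has paid off and occasionally tries something new. The two actions, their success rates, and the exploration rate are all illustrative assumptions, not part of DeepMind’s work.

```python
import random

random.seed(0)

# Two unfamiliar actions; the agent does not know their true success rates.
true_reward = {"a": 0.2, "b": 0.8}
value = {"a": 0.0, "b": 0.0}   # running estimate of each action's payoff
count = {"a": 0, "b": 0}

for step in range(2000):
    # Mostly repeat what has worked so far; occasionally try something else.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

print(max(value, key=value.get))  # the behavior that succeeds gets repeated
```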

What accelerates the learning process is that the system ‘remembers everything’ instead of remembering selectively.

“Our architecture does not try to learn when to write to memory, as this can be slow to learn and take a significant amount of time. Instead, we elect to write all experiences to the memory, and allow it to grow very large compared to existing memory architectures,” the team explained.
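The ‘write all experiences to memory’ idea can be sketched as an append-only key–value store queried by nearest neighbours: every experience is written without any learned gating, and a new situation is valued by averaging its closest past experiences. This is a rough sketch of the general idea, not DeepMind’s actual implementation; the embeddings, distance measure, and sample data are assumptions.

```python
import numpy as np

class EpisodicMemory:
    """Append-only memory: every experience is written, none are filtered."""
    def __init__(self):
        self.keys = []     # state embeddings
        self.values = []   # outcomes (e.g. estimated returns)

    def write(self, key, value):
        # No learned gating over what to store: everything goes in.
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(float(value))

    def lookup(self, query, k=3):
        # Value a new state by averaging its k nearest stored experiences.
        dists = [np.linalg.norm(np.asarray(query) - key) for key in self.keys]
        nearest = np.argsort(dists)[:k]
        return sum(self.values[i] for i in nearest) / len(nearest)

mem = EpisodicMemory()
mem.write([0.0, 0.0], 1.0)   # a past success
mem.write([0.1, 0.0], 1.0)   # another nearby success
mem.write([5.0, 5.0], 0.0)   # a distant failure
print(mem.lookup([0.05, 0.0], k=2))  # → 1.0: near past successes, so looks good
```

Because a successful experience is available for lookup the moment it is stored, the agent can act on it immediately rather than waiting for many slow optimisation steps, which is the speed-up the team describes.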

An AI that can be taught like a human is a double-edged sword. On one edge, it sharpens the threat that AI might one day gain superiority over the human race. On the other, it carries the promise of better things to come, opening up a slew of new and exciting possibilities that will hopefully lead to technologies that make our lives better.

Whichever way it turns out, we’ll just have to wait and see.