Neurons store and transmit information in the brain. Credit: CNRI/SPL

Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain. That achievement, described in Science Advances on 26 January, is a key benchmark in the development of advanced computing devices designed to mimic biological systems. And it could open the door to more natural machine-learning software, although many hurdles remain before it could be used commercially.

Artificial intelligence software has increasingly begun to imitate the brain. Algorithms such as Google’s automatic image-classification and language-learning programs use networks of artificial neurons to perform complex tasks. But because conventional computer hardware was not designed to run brain-like algorithms, these machine-learning tasks require orders of magnitude more computing power than the human brain does.

“There must be a better way to do this, because nature has figured out a better way to do this,” says Michael Schneider, a physicist at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado, and a co-author of the study.

NIST is one of a handful of groups trying to develop ‘neuromorphic’ hardware that mimics the human brain in the hope that it will run brain-like software more efficiently. In conventional electronic systems, transistors process information at regular intervals and in precise amounts — either 1 or 0 bits. But neuromorphic devices can accumulate small amounts of information from multiple sources, alter it to produce a different type of signal and fire a burst of electricity only when needed — just as biological neurons do. As a result, neuromorphic devices require less energy to run.
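The accumulate-and-fire behaviour described above is often modelled in software as a "leaky integrate-and-fire" neuron: incoming signals are summed into a potential that decays over time, and the unit emits a spike only when that potential crosses a threshold. The sketch below is purely illustrative — the function name, parameter values, and weights are assumptions for demonstration, not details of the NIST device:

```python
# Minimal leaky integrate-and-fire neuron: accumulates weighted input
# from several sources and fires a spike only when a threshold is crossed.
# All names and numeric values here are illustrative assumptions.

def lif_step(potential, inputs, weights, leak=0.9, threshold=1.0):
    """Advance the neuron one time step; return (new_potential, spiked)."""
    potential = leak * potential + sum(w * x for w, x in zip(weights, inputs))
    if potential >= threshold:
        return 0.0, True   # fire, then reset the potential
    return potential, False

# Drive the neuron with steady sub-threshold inputs until it fires.
v, spikes = 0.0, []
for t in range(10):
    v, fired = lif_step(v, inputs=[0.2, 0.15], weights=[1.0, 1.0])
    spikes.append(fired)
```

Because each input alone stays below the threshold, the neuron only spikes after several steps of accumulation — the energy-saving property that neuromorphic hardware exploits by staying quiet between spikes.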

Mind the gap

Yet these devices are still inefficient, especially when they transmit information across the gap, or synapse, between transistors. So Schneider’s team created neuron-like electrodes out of niobium superconductors, which conduct electricity without resistance. They filled the gaps between the superconductors with thousands of nanoclusters of magnetic manganese.

By varying the strength of the magnetic field applied to the synapse, the nanoclusters can be aligned to point in different directions. This allows the system to encode information both in the level of electricity and in the direction of magnetism, giving it far greater computing power than other neuromorphic systems without taking up additional physical space.

The synapses can fire up to one billion times per second — several orders of magnitude faster than human neurons — and use one ten-thousandth of the amount of energy used by a biological synapse.

An artificial synapse connects with high-speed electrical probes. Credit: NIST

In computer simulations, the synthetic neurons could collate input from up to nine sources before passing it on to the next electrode. But millions of synapses would be necessary before a system based on the technology could be used for complex computing, Schneider says, and it remains to be seen whether it will be possible to scale it to this level.

Another issue is that the synapses can operate only at temperatures close to absolute zero, and need to be cooled with liquid helium. Steven Furber, a computer engineer at the University of Manchester, UK, who studies neuromorphic computing, says that this might make the chips impractical for use in small devices, although a large data centre might be able to maintain them. But Schneider says that cooling the devices requires much less energy than operating a conventional electronic system with an equivalent amount of computing power.

Alternative approach

Carver Mead, an electrical engineer at the California Institute of Technology in Pasadena, praises the research, calling it a fresh approach to neuromorphic computing. “The field’s full of hype, and it’s nice to see quality work presented in an objective way,” he says. But he adds that it would be a long time before the chips could be used for real computing, and he points out that they face stiff competition from the many other neuromorphic computing devices under development.

Furber also stresses that practical applications are far in the future. “The device technologies are potentially very interesting, but we don’t yet understand enough about the key properties of the [biological] synapse to know how to use them effectively,” he says. For instance, there are many outstanding questions about how synapses remodel themselves when forming a memory, making it difficult to recreate the process in a memory-storing chip.

Still, Furber says that because it takes 10 years or more for new computing devices to reach the market, it is worth developing as many different technological approaches as possible, even as neuroscientists struggle to understand the human brain.