Thanks to a sleek new computer chip developed by IBM, we are one step closer to making computers work like the brain.

The neuromorphic chip is made from a phase-change material commonly found in rewritable optical discs (confused? more on this later). Because of this secret sauce, the chip’s components behave strikingly like biological neurons: they can scale down to nanometer size and rapidly perform complicated computations with little energy.

What makes them especially amazing is how they “fire.” They integrate previous input history to determine whether or not to activate. They also show a characteristic trait of biological neurons called stochasticity — that is, given the same input, the chip produces slightly different, unpredictable results each time. Stochasticity is the basis of population coding, a type of highly efficient computation that relies on groups of neurons working together. This neuronal quirk was previously tough to mimic using artificial materials.

The chip builds on previous brain-like computing devices, such as memristors, Dr. C. David Wright of the University of Exeter told Singularity Hub. It’s a huge leap forward for “building dense, large-scale, interconnected synapses to provide fast neuromorphic processors,” he says.

Brain-like computation

Scientists have long dreamed of making computers that mimic the massive parallel computational ability of the brain’s neuronal networks. That’s a hefty goal.

“Brains fuse together processing and memory tasks…using surprisingly little energy and occupy a remarkably small volume,” explains Wright. The human brain consumes about 10 to 20 watts of power and occupies less than 2 liters of space, he says. Traditional silicon transistor-based circuits, with tough-to-shrink capacitors, are simply too clunky to cram into brain-like circuits. They also process information serially in strings of binary digits, a far cry from biological neural computation.

So how do neurons work?

In a nutshell: a neuron receives input through branching cables called dendrites. This input changes the electrical potential across its cell membrane. The neuron keeps track of the various input signals arriving within a small time window and integrates them. When the aggregated signal reaches a certain threshold, the neuron bursts into activity and generates a spike. The spike then travels down the output cable — the axon — and is transmitted to downstream neurons through small mushroom-shaped junctions called synapses.
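The integrate-and-fire principle is simple enough to sketch in a few lines of code. The toy Python model below is an illustration of the idea, not IBM’s hardware: old inputs leak away, new inputs accumulate, and crossing a threshold triggers a spike and a reset. The `leak` and `threshold` values are made-up parameters.

```python
class IntegrateAndFireNeuron:
    """Toy leaky integrate-and-fire neuron (illustrative only)."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def step(self, input_current):
        # Integrate: decay the old potential, then add the new input
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # spike
        return 0                  # stay silent

neuron = IntegrateAndFireNeuron()
spikes = [neuron.step(0.3) for _ in range(10)]
print(spikes)  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

No single input is enough to fire the neuron; only the accumulated history of inputs pushes it over threshold, which is exactly the integration behavior described above.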

This “integrate-and-fire” principle heavily relies on the biophysics of the neuronal membrane. Previous neuromorphic chips mostly focused on mimicking information processing at the synapse, paying little attention to how neurons actually fire. And that’s where IBM’s new chip differs: it eschews the synapse, opting instead to simulate the generation of spikes in a neuron.

“In a complete system, of course, we need both neurons and synapses,” says Wright, so being able to mimic both in hardware is huge.

The phase-change chip

To build the chip, the team enlisted a phase-change material to play the part of a neuronal membrane. The material, a chalcogenide alloy, exists in two physical phases — a glassy, almost liquid-like amorphous state and a solid, crystalline state — that rapidly switch when the material is zapped with electricity.

Each phase has its own electrical properties, making it easy to determine what state the material is in — an ideal situation for storing binary data. Here, the amorphous phase insulates, whereas the crystalline state conducts.

The artificial neuron begins in the amorphous, insulating state. When given multiple pulses of electricity (“inputs”), it progressively crystallizes until it reaches a certain threshold. At that point, the material becomes solid enough to conduct electricity, which causes it to fire an output spike. If this sounds familiar, you’re right: that’s exactly how integrate-and-fire works in biological neurons. After a brief period of rest, the chip shifts back to the amorphous state, ready for another cycle.

What’s more, due to the manufacturing process and variable internal atomic states, the chip is inherently stochastic. That’s a big deal.

“Stochasticity is an essential ingredient for constructing ‘neuronal populations’ and our brain naturally uses these to represent signals and cognitive states,” says lead author Dr. Tomas Tuma.
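The fire cycle and its built-in randomness can be sketched under the same toy assumptions. The `crystallinity` variable, noise level, and reset rule below are illustrative stand-ins for the device physics, not the actual phase-change behavior:

```python
import random

def phase_change_neuron(pulses, threshold=1.0, noise=0.05, seed=0):
    """Toy stochastic phase-change neuron (illustrative only).

    Each electrical pulse nudges the device toward its crystalline,
    conducting state; crossing the threshold fires a spike and resets
    the device to its amorphous, insulating state.
    """
    rng = random.Random(seed)
    crystallinity = 0.0          # 0 = fully amorphous, 1 = fully crystalline
    spikes = []
    for pulse in pulses:
        # Atomic-scale variability makes each crystallization step noisy
        crystallinity += pulse + rng.gauss(0, noise)
        if crystallinity >= threshold:
            spikes.append(1)     # conducting: fire an output spike
            crystallinity = 0.0  # reset ("melt") back to amorphous
        else:
            spikes.append(0)
    return spikes

# Identical input pulses, different random seeds -> different spike trains
print(phase_change_neuron([0.3] * 12, seed=1))
print(phase_change_neuron([0.3] * 12, seed=2))
```

Running the model twice on identical inputs with different seeds yields slightly different spike trains, which is the stochasticity Tuma describes: the randomness comes from inside the neuron, not from the input.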

So what can the new chip do?

To test the power of their phase-change neurons, the team engineered a mushroom-shaped gadget consisting of a 100-nanometer-thick layer of chalcogenide alloy sandwiched between two electrodes. That counts as a single neuron. In one demonstration, the team generated 1,000 streams of binary data, 100 of which were statistically correlated — that is, some streams showed a weakly similar pattern to others (note this is a “toy” dataset without any real-life meaning).

Fishing out correlations like these is generally tough since it requires a computer to look at multiple streams simultaneously and compare their information in real time. Yet a single artificial neuron managed to pick out every correlation using very little power.
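To make the task concrete, here is a simplified re-creation of that kind of dataset — not the team’s actual experiment or method. A small group of binary streams weakly follows a hidden template; the sketch then flags the correlated ones by how often each stream agrees with the population’s majority vote. The stream counts, flip probability, and threshold are all arbitrary choices for illustration:

```python
import random

def make_streams(n_streams=200, n_corr=20, length=500, p_flip=0.1, seed=0):
    """Toy binary streams: the first n_corr weakly copy a hidden template,
    the rest are independent coin flips."""
    rng = random.Random(seed)
    template = [rng.randint(0, 1) for _ in range(length)]
    streams = []
    for i in range(n_streams):
        if i < n_corr:
            # Correlated: copy the template, flipping each bit with prob p_flip
            s = [b ^ (rng.random() < p_flip) for b in template]
        else:
            s = [rng.randint(0, 1) for _ in range(length)]
        streams.append(s)
    return streams

def agreement(stream, reference):
    # Fraction of time steps on which the two binary streams agree
    return sum(a == b for a, b in zip(stream, reference)) / len(stream)

streams = make_streams()
# Use the element-wise majority vote of all streams as a reference signal
majority = [int(sum(col) > len(streams) / 2) for col in zip(*streams)]
flagged = [i for i, s in enumerate(streams) if agreement(s, majority) > 0.65]
print(flagged)  # indices of the streams flagged as correlated
```

Uncorrelated streams agree with the majority about half the time, while the correlated group agrees far more often, so a simple threshold separates them — a software stand-in for what the phase-change neuron did in hardware.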

That’s a computational task of surprising complexity, notes Wright.

“When applied to social media and search engine data, this leads to some remarkable possibilities, such as predicting the spread of infectious disease, trends in consumer spending and even the future state of the stock market,” he writes in a comment piece published alongside the study in Nature Nanotechnology.

To check the scalability of their neurons, the IBM team interconnected 100 phase-change devices in a 10-by-10 array and strung five arrays together to form a population of 500 artificial neurons. The team then fed this artificial network a stream of broadband signals whose frequencies exceeded the firing rates of the individual neurons.

Here’s the cool part. Because the neurons are stochastic, their combined activity — the so-called population code — was sufficient to adequately represent the signals without additional costly operations. In other words, the network functioned far above the computational limits of its individual components. And it did so using just a spark of power: on average, the network required only about 120 microwatts.
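The pooling trick can be sketched in the same toy spirit: each simulated neuron fires sparsely and unpredictably, yet the averaged activity of a 500-neuron population tracks the input closely. The firing-probability model below is an assumption for illustration, not how the actual devices encode signals:

```python
import random

def population_encode(signal, n_neurons=500, seed=0):
    """Toy population code: each stochastic neuron fires sparsely and
    unpredictably, but the pooled firing rate tracks the input signal."""
    rng = random.Random(seed)
    rates = []
    for value in signal:         # value in [0, 1] sets the firing probability
        spikes = sum(rng.random() < value for _ in range(n_neurons))
        rates.append(spikes / n_neurons)   # population firing rate
    return rates

# The population's pooled rate should closely track these input values
signal = [0.1, 0.9, 0.2, 0.8, 0.5]
estimate = population_encode(signal)
print([round(e, 2) for e in estimate])
```

No single neuron’s spike train resembles the signal, but the noise of 500 independent neurons averages out, so the population as a whole represents the signal with no extra decoding machinery.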

“This is important for building dense, scalable neuromorphic systems for memory applications and computing,” explains Tuma. For example, they could power machines with co-located memory and processing units, thus shattering the bottleneck of traditional von Neumann computers, in which memory and processing are physically separated.

Wright agrees that the chip has significant potential, but also warns of its issues. The limited number of times that these devices can be switched before failure could significantly limit processor lifetimes, he writes. Shifting the device back to the amorphous state after an activation cycle is also energy consuming, which could become a concern once these artificial neuron arrays get larger.

That said, Wright is incredibly impressed with the chip.

“[Since] phase-change and memristor devices can work up to a million times faster than the processing speeds of the human brain, we can imagine some very powerful computing systems,” he says.

Now comes the hard part: writing software that takes maximal advantage of the chip’s computational prowess.

Banner Image Credit: IBM Research