In the past week, both AMD and Intel have given us a tantalizing peek at their next-generation neuromorphic (brain-like) computer chips. These chips, it is hoped, will provide brain-like performance (i.e. processing power and massive parallelism way beyond current CPUs) while consuming minimal amounts of power.

After announcing last year at its Fusion Developer Summit that its Heterogeneous System Architecture (HSA) would be an open, architecture-agnostic spec that could be implemented by anyone (including Intel), AMD last week announced that its future APUs will feature an ARM Cortex-A5 core to implement TrustZone, ARM Holdings’ security and DRM solution. AMD also announced that it has teamed up with ARM, Imagination Technologies, MediaTek, and Texas Instruments to form the HSA Foundation. The idea is that this non-profit consortium will try to coalesce around a single HSA specification, primarily so that developers can create software that makes full use of the various flavors of compute power available to them.

It isn’t too crazy to think that a future AMD (or Texas Instruments) chip might have a few GPU cores, a few x86 CPU cores, and thousands of tiny ARM cores, all working in perfect, parallel, neuromorphic harmony — as long as the software toolchain is good enough that ordinary developers can use all of those resources efficiently without heroic effort.

Intel’s neuromorphic chip design is very different indeed, involving two rather nascent technologies: multi-input lateral spin valves (LSV) and memristors. LSVs are microscopic magnets that change their magnetism to match the spin of electrons being passed through them (spintronics). Memristors are electronic components that increase their resistance as electricity passes through them one way, and reduce their resistance when electricity flows in the opposite direction — and when no power flows, the memristor remembers its last resistance value (meaning it can store data).
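To make that memristor behavior concrete, here is a toy Python model — purely my own illustrative sketch with made-up numbers, not Intel’s device physics. Current in one direction nudges the resistance up, current in the other direction nudges it down, and the value persists when nothing is driving the device:

```python
class ToyMemristor:
    """Illustrative (non-physical) memristor model: resistance rises as
    current flows one way, falls when it flows the other way, and the
    last value persists when no power is applied. All constants are
    arbitrary, chosen only to show the qualitative behavior."""

    def __init__(self, resistance=1000.0, r_min=100.0, r_max=10000.0, k=50.0):
        self.resistance = resistance          # ohms (stored state)
        self.r_min, self.r_max, self.k = r_min, r_max, k

    def apply_current(self, current, dt):
        # Positive current raises resistance; negative current lowers it.
        self.resistance += self.k * current * dt
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))

    def read(self):
        # A read doesn't disturb the stored resistance value.
        return self.resistance

m = ToyMemristor()
m.apply_current(+2.0, dt=1.0)   # drive one way: resistance increases
high = m.read()
m.apply_current(-2.0, dt=1.0)   # reverse the flow: resistance decreases
low = m.read()
print(high > low)               # the device "remembers" its state
```

The non-volatility is the key point: whatever resistance the device was left with is what you read back later, which is what lets a memristor double as storage.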



By wiring up LSVs and memristors into a cross-bar switch lattice (pictured above), Intel claims it can build a neuromorphic CPU. The idea seems to be that the LSVs act as neurons, while the memristors act as synapses, with the resistance value equating to the “weight” (importance) of the synaptic link. We’re talking about incredibly small components here (probably tens of nanometers), so in theory Intel might be able to build a chip with billions of neurons and synapses — a far cry from the hundred trillion synapses in the human brain, but then again our brains only have a clock speed (refresh rate?) of around 100Hz. Intel’s neuromorphic chip would presumably operate in the gigahertz or terahertz range.
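The neuron/synapse analogy maps naturally onto a weighted sum: each memristor’s conductance (1/resistance) scales an input signal, the currents sum on a shared wire, and the LSV “neuron” flips if the total crosses a threshold. Here is a hedged Python sketch of that crossbar arithmetic — my own illustration of the general principle, not Intel’s actual circuit:

```python
def crossbar_neuron(inputs, conductances, threshold):
    """One 'neuron' in a memristor crossbar: each input voltage drives a
    memristor whose conductance (1/R) acts as the synaptic weight.
    Currents sum on the shared row wire (Kirchhoff's current law), and
    the neuron 'fires' if the total current exceeds a threshold."""
    total_current = sum(v * g for v, g in zip(inputs, conductances))
    return total_current >= threshold

# Three inputs; the middle synapse is strong (low resistance = high weight).
inputs = [1.0, 1.0, 0.0]          # volts on each input line
conductances = [0.1, 0.9, 0.5]    # siemens; the "learned" synaptic weights
fired = crossbar_neuron(inputs, conductances, threshold=0.5)
print(fired)
```

The attraction of the crossbar is that this multiply-accumulate happens “for free” in the analog domain — the physics of the lattice does the arithmetic, rather than a sequence of instructions.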

When we’ve covered brain-like CPUs before, the focus has always been on imitating the massive parallelism of the human brain. Animal brains have another incredible trait, though: They’re ultra-low-power devices. A single human brain is more powerful than the fastest supercomputer on the planet, and yet it consumes just 30 watts.

The Intel researchers posit that their neuromorphic chip can reach similar efficiency. Where state-of-the-art CMOS transistors need on the order of a volt to switch on and off, the LSV neurons require only a handful of electrons to flip their orientation — a switching voltage of around 20 millivolts. For some applications, Intel thinks its neuromorphic chip could be up to 300 times more energy efficient than the CMOS equivalent.

The one caveat, though, is that this spin-based chip hasn’t actually been built — it’s just a theoretical design that has been simulated on some powerful (conventional!) computers. To my eyes, though, the implementation looks sound — and it can be built using current semiconductor processes, which is handy. Memristors are maturing quickly, and spintronics, because of its ultra-low-power potential, is receiving a lot of attention from research groups all around the world.

As we move from multi-core CPUs to many-core Intel Larrabee/Knights Ferry processors with 50+ cores, heterogeneous AMD Trinity/Kaveri APUs with multiple FPUs and hundreds of individual graphics cores, and the neuromorphic chip detailed here, the decades-old archetypal definition of “CPU” is blurring and morphing into something else entirely. Rather than a collection of transistors, it looks like compute cores (or artificial neurons) will become the basic building blocks of processors. Today we’re talking about 2,000 shader cores — in a few years, it might be a few million.

As we’ve covered before, transistor-based silicon chips aren’t going anywhere for a long time yet — but as long as Intel and AMD make it easy enough to program a neuromorphic CPU, I think we’re surprisingly close to the end of simulating massively parallel neural networks on serial hardware and actually building a brain on a chip.