Neuromorphic chips were first developed by Carver Mead at Caltech in the late ’80s. He made many of the most significant advances to date in analog VLSI, and later developed a low-power silicon retina. One of his students, Kwabena Boahen, continued work on the silicon retina and was able to mimic many features of real retinas, including luminance adaptation and contrast gain control. Neuromorphic computing designs have yet to compete with traditional computing architectures, which continue to impress. For instance, IBM announced this past November that its Blue Gene/Q Sequoia supercomputer could clock 16 quadrillion calculations per second, and could crudely simulate more than 530 billion neurons. What IBM did not advertise, though, is that to do this, Blue Gene consumes nearly 8 megawatts, enough to power 1,600 homes. Boahen has now developed a new computing platform, called Neurogrid, that runs around 100,000 times more efficiently. Each Neurogrid board, running at 5 watts, can simulate the detailed activity of one million neurons, and it can now do so in real time.

Real-world applications for neuromorphic computers have been slow to arrive. Part of the problem is that regular old sequential computers could still simulate neural networks with far less human effort. Some degree of parallelism has been achieved in these designs using GPUs and FPGAs, but they still use transistors in fundamentally the same way: digitally. Digitally simulating ion channels in software-defined neurons is the main source of computational overhead. Where simulation means capturing a system's essential properties in software, emulation means capturing them directly in hardware. With the neuromorphic approach, the flow of ions through channels is emulated directly by the flow of electrons through transistors. Instead of using only their on-or-off behavior, the full analog dynamics of transistors, clustered into groups of six or eight, are used to emulate the compartments of a neuron.
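To see where that digital overhead comes from, here is a minimal sketch of the kind of ion-channel equation a conventional simulator must grind through numerically: one forward-Euler update of a Hodgkin-Huxley-style potassium gating variable. The rate functions and timestep are textbook illustrative values, not taken from any specific simulator mentioned here. A digital machine evaluates equations like these for every channel, in every neuron, at every timestep; neuromorphic hardware gets the equivalent dynamics from transistor physics for free.

```python
import math

def alpha_n(v):
    # Channel opening rate (1/ms) as a function of membrane voltage v (mV).
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    # Channel closing rate (1/ms).
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

def step_n(n, v, dt=0.025):
    # dn/dt = alpha*(1 - n) - beta*n, integrated with one forward-Euler step.
    return n + dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)

n = 0.3177            # gating variable near its resting value
for _ in range(400):  # 10 ms of simulated time at dt = 0.025 ms
    n = step_n(n, -65.0)
```

At a resting potential of -65 mV the gating variable simply settles at its steady state, yet the simulator still pays for thousands of evaluations per neuron to find that out.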

Mead’s early designs linked transistorized neurons directly in hardware. That approach has its merits, and is a more physically accurate realization of neuronal interconnection. On a two-dimensional chip, though, implementing the connections between neurons virtually is a forgivable compromise. Boahen took this second approach, giving each neuron an address that points to a location in RAM. That RAM location holds the address of the synaptic target. When the target address is fed back to the chip, a small voltage, a synaptic potential, is triggered at the target neuron. These soft-wired virtual synapses can be encoded, translated, and decoded fast enough to route millions of spikes per second. More interesting, and more relevant for hardware brains that must remember and learn, is that these soft wires can be rerouted simply by overwriting the RAM’s look-up table.
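The routing scheme described above can be sketched in a few lines. This is a hedged illustration of the idea, not Neurogrid's actual address encoding: a look-up table maps each source neuron's address to its synaptic targets, a spike is delivered by reading that table, and "learning" amounts to overwriting an entry.

```python
# Look-up table held in RAM: source neuron address -> synaptic target addresses.
# The addresses here are invented for illustration.
synapse_table = {
    0x01: [0x10, 0x11],   # neuron 0x01 projects to two targets
    0x02: [0x11],
}

def route_spike(source, table):
    # When a neuron spikes, its address is looked up in RAM and the
    # spike is fed back to the chip at each target address.
    return table.get(source, [])

# A spike from neuron 0x01 reaches both of its targets.
targets = route_spike(0x01, synapse_table)

# Rewiring: overwrite the RAM entry and the soft-wired synapse is
# rerouted, with no physical rewiring of the chip.
synapse_table[0x01] = [0x12]
```

The design choice this illustrates is the trade-off in the article: physical wires are faster and more faithful, but a table in memory makes connectivity a piece of mutable state.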

Researchers working with a 2,048-processor Blue Gene rack simulated a single second of activity in 8 million neurons connected by 4 billion synapses, and it took a little over an hour. The ion channel equations representing those neurons had to be evaluated 40 trillion times. By comparison, a Neurogrid board spreads its roughly one million neurons across 16 chips of about 65,000 neurons each. A Neurogrid neuron has two discrete compartments and is defined by around 80 parameters. A million neurons is enough to do some interesting processing. For example, light from pixel arrays can be integrated and parsed for directional motion detection, or for center-surround inhibition and other image-processing techniques. Robot arms with multiple degrees of freedom could be neuromorphically controlled to evolve precise movements and solve complex tasks. Older iterations of Neurogrid ran slower than real time, like the Blue Gene. The new Neurogrid board can now do things in a temporally more interesting manner: real time. Boahen and others now expect this to bring neuromorphic computing out of the lab and into real-world applications.
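Taking the Blue Gene figures above at face value, a little back-of-the-envelope arithmetic shows the scale of the digital approach's overhead. The one-hour wall-clock time is an approximation of "a little over an hour."

```python
# Figures quoted from the article, assumed exact for illustration.
neurons = 8_000_000
evaluations = 40_000_000_000_000   # 40 trillion ion-channel equation evaluations
wall_clock_s = 3600                # roughly one hour of computer time
simulated_s = 1                    # for one second of simulated activity

# Each simulated neuron required millions of equation evaluations...
evals_per_neuron = evaluations // neurons   # 5,000,000 per neuron

# ...and the simulation ran thousands of times slower than real time.
slowdown = wall_clock_s // simulated_s      # ~3,600x slower than biology
```

Against that roughly 3,600-fold slowdown, Neurogrid's real-time operation at 5 watts is the headline result.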
