A new paper from researchers working in the UK and Germany dives into how much power the human brain consumes when performing various tasks, and sheds light on how humans might one day build similar computer-based artificial intelligences. Mapping biological systems isn't as sexy as the giant discoveries that propel new products or capabilities, but that's because it's the final discovery, not the decades of painstaking work that lay the groundwork, that tends to receive all the media attention.

This paper — Power Consumption During Neuronal Computation — will run in an upcoming issue of IEEE’s magazine, “Engineering Intelligent Electronic Systems Based on Computational Neuroscience.” Here at ET, we’ve discussed the brain’s computational efficiency on more than one occasion. Put succinctly, the brain is more power efficient than our best supercomputers by orders of magnitude — and understanding its structure and function is absolutely vital.

Is the brain digital or analog? Both

When we think about compute clusters in the modern era, we think about vast arrays of homogeneous or nearly homogeneous systems. Sure, a supercomputer might combine two different types of processors (Intel Xeon + Nvidia Tesla, for example, or Intel Xeon + Xeon Phi), but as different as CPUs and GPUs are, they're both still digital processors. The brain, it turns out, incorporates both digital and analog signaling, and it uses the two methods in different ways. One potential reason why: the power efficiency of the two methods varies dramatically depending on how much bandwidth you need and how far the signal has to travel.

The efficiency of the two systems depends on the signal-to-noise ratio (SNR) you need to maintain within the system.
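The tradeoff is easiest to see with a toy model. The scaling below (analog power growing linearly with SNR, digital power growing with the number of bits of precision) follows the standard analog-versus-digital argument in neuromorphic engineering; the constants are invented for illustration and are not taken from the paper.

```python
import math

# Toy model of the analog/digital crossover. Assumption (ours, not the
# paper's): a thermal-noise-limited analog circuit must spend signal
# power proportional to the SNR it maintains, while a digital circuit
# spends power proportional to its bit depth, i.e. log2(SNR).
# COST_PER_BIT is an invented constant chosen only to place the
# crossover somewhere visible.

COST_PER_BIT = 10_000.0  # arbitrary units per bit of digital precision

def analog_power(snr):
    """Power needed to hold a given SNR in an analog signal (arbitrary units)."""
    return snr

def digital_power(snr):
    """Power needed to represent the same precision digitally (arbitrary units)."""
    return COST_PER_BIT * math.log2(snr)

for db in (20, 40, 60, 80):
    snr = 10 ** (db / 10)
    cheaper = "analog" if analog_power(snr) < digital_power(snr) else "digital"
    print(f"{db} dB SNR: {cheaper} is cheaper")
```

With these made-up constants the crossover falls between 40 and 60 dB: low-precision, noisy signaling favors analog, while high-precision signaling favors digital, which is the qualitative pattern the brain appears to exploit.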

One of the other differences between existing supercomputers and the brain is that neurons aren't all the same size and they don't all perform the same function. If you've taken high school biology, you may remember that neurons are broadly classified as motor neurons, sensory neurons, or interneurons. That grouping ignores the subtle differences between the various structures; the actual number of distinct neuron types in the brain is estimated at anywhere from several hundred to perhaps 10,000, depending on how you classify them.

Compare that to a modern supercomputer, which uses two or three (at the very most) CPU architectures to perform calculations, and you'll start to see the difference between our own efforts to reach exascale-level computing and simulate the brain, and the actual biological structure. If our models approximated the biological functions, you'd have clusters of ARM Cortex-M0 processors tied to banks of 15-core Xeons pushing data to Tesla GPUs, which in turn were tied to some Intel Quark processors, with another trunk shifting work to a group of IBM Power8 cores, all working in perfect harmony. Just as modern CPUs have vastly different energy efficiencies, die sizes, and power consumption levels, we see exactly the same trends in neurons.

All three charts are interesting, but it's the chart on the far right that intrigues me most. Relative efficiency is graphed along the vertical axis, while the horizontal axis shows bits per second. Looking at it, you'll notice that the neurons that are most efficient in terms of bits transferred per ATP molecule (ATP is the molecule cells use as an energy currency, so bits per ATP is roughly analogous to performance per watt in computing) are also among the slowest in terms of bits per second. The neurons that can transfer the most data in bits per second are also the least efficient.
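One way to build intuition for that tradeoff is a toy spiking-neuron sketch. Every number below (the resting cost, the per-spike ATP cost, the 5 ms signaling window) is invented for illustration, not data from the paper: information rate grows with firing rate but with diminishing returns, while ATP consumption grows linearly, so bits per ATP falls as bits per second rises.

```python
import math

# Toy model with made-up constants: a neuron pays a fixed resting
# (housekeeping) cost plus a cost per spike, while the information it
# carries is modeled as the entropy of a sparse binary spike train.

RESTING_COST = 1e8   # ATP molecules per second (hypothetical)
SPIKE_COST = 1e9     # ATP molecules per spike (hypothetical)

def info_rate(rate_hz, window=0.005):
    """Bits per second: each 5 ms window carries H(p) bits, p = spike probability."""
    p = min(rate_hz * window, 0.5)
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return h / window

def atp_rate(rate_hz):
    """ATP molecules consumed per second at a given firing rate."""
    return RESTING_COST + SPIKE_COST * rate_hz

for r in (1, 10, 100):
    bps = info_rate(r)
    eff = bps / atp_rate(r)
    print(f"{r:>3} Hz: {bps:6.1f} bits/s, {eff:.2e} bits per ATP")
```

Running it shows the 100 Hz "neuron" moving far more bits per second than the 1 Hz one, but at a severalfold worse bits-per-ATP efficiency, mirroring the shape of the right-hand chart.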

Again, we see clear similarities between the design of modern microprocessors and the characteristics of biological organisms. That's not to downplay the size of the gap, or the dramatic improvements we'd have to make in order to offer similar levels of performance, but there's no secret sauce here, and analyzing these biological systems should give us better data on how to tweak semiconductor designs to approximate them.

Much of what we cover on ExtremeTech is cast in terms of the here-and-now. A better model of neuron energy consumption doesn’t really speak to any short-term goals — this won’t lead directly to a better microprocessor or a faster graphics card. It doesn’t solve the enormous problems we face in trying to shift conventional computing over to a model that more closely mimics the brain’s own function (neuromorphic design). But it does move us a critical step closer to the long-term goal of fully understanding (and possibly simulating) the brain. After all, you can’t simulate the function of an organ if you don’t understand how it signals or under which conditions it functions. [Read: A bionic prosthetic eye that speaks the language of your brain.]

Emulating a brain has at least one thing in common with emulating an instruction set in computing: the greater the gap between the two architectures, the larger the power cost of emulation tends to be. The better we can analyze the brain, the better our chances of emulating one without needing industrial power stations to keep the lights on and the cooling running.