The evolution of computers, and the evolution of life, share a common constraint: beyond a certain level of complexity, advantage goes to that which can build on what is already in hand rather than redesigning from scratch. Life’s crowning achievement, the human brain, seeks to mold for itself the power and directness of the computing machine, while endowing the machine with its own economy of thought and movement. To predict the form their inevitable convergence might take, we can look back now with greater understanding at the early drivers that shaped life, and reinvigorate those ideas to guide our construction.

“Life is, in effect, a side reaction of an energy-harnessing reaction. It requires vast amounts of energy to go on.”

Nick Lane, author of a new paper in the journal Cell, was speaking here about critical processes in the origin of life, though his words would also be an apt description of computing in general. His paper puts forth bold new ideas for how proto-life forms originated in deep-sea hydrothermal vents by harnessing energy gradients. The strategies employed by life offer some insights into how we might build the ultimate processor of the future.

Many of us have read claims regarding the information storage capacity and processing rate of the human brain, and have wondered: how do they measure that? Well, the fact is, they don’t. With our limited understanding of how living systems like the brain work, it is folly at this point to attempt any direct comparison with the operation of computing machines. Empirical guesswork is often attempted, but in the end it is little more than handwaving.

Google, while clearly not a brain of any kind, certainly processes a lot of information. We might ask, how well does it actually perform? It is easy enough to verify that a typical search query takes less than 0.2 seconds. Each server that touches the operation spends perhaps a few thousandths of a second on it. Google’s engineers have estimated that the total work involved in indexing and retrieval amounts to about 0.0003 kWh of energy per search. They did not indicate how they arrived at this number, but if we think about it, it is a fascinating result, despite their unfortunate choice of unit prefix. Suppose we take the liberty of defining this quantity, the energy per search, as a googlewatt. Such a measure would be a convenient way to characterize a computing ecosystem, much as the Reynolds number qualitatively characterizes flow conditions across aerodynamic systems.
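To see why the kilowatt-hour prefix is unfortunate, it helps to restate the figure in plainer units. The short Python sketch below uses nothing beyond Google’s published 0.0003 kWh figure and unit arithmetic:

```python
# Google's published figure: about 0.0003 kWh of energy per search.
energy_kwh = 0.0003

energy_wh = energy_kwh * 1000   # 0.3 Wh per search
energy_j = energy_wh * 3600     # 1 Wh = 3600 J, so 1080 J per search

print(f"{energy_wh:.1f} Wh per search")   # 0.3 Wh per search
print(f"{energy_j:.0f} J per search")     # 1080 J per search
```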

One might then ask: if the size of a completely indexed web crawl is constantly expanding while the energy per elementary search operation contracts with improvements in processor efficiency, how might the googlewatt scale as the ecosystem continues to evolve? In other words, can we hope to go on querying an ever-growing database at 0.3 Wh per search, or, in dollar terms, at $0.0003 per search?
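We can at least play with the question. The following toy model rests entirely on assumed parameters: an index that doubles every two years, query work that grows logarithmically with index size, and energy efficiency that doubles roughly every year and a half, in line with Koomey’s historical trend. None of these figures come from Google, but under these assumptions the googlewatt actually shrinks over time:

```python
import math

# Toy model only -- every parameter here is an assumption, not a Google figure.
E0 = 0.3                 # Wh per search today
N0 = 1e12                # assumed number of indexed documents today
index_doubling = 2.0     # years for the index to double (assumed)
koomey_doubling = 1.6    # years for ops-per-joule to double (Koomey's trend)

for years in range(0, 11, 2):
    index_size = N0 * 2 ** (years / index_doubling)
    work = math.log2(index_size) / math.log2(N0)      # query work grows ~O(log N)
    efficiency = 2 ** (years / koomey_doubling)       # relative ops per joule
    print(f"year {years:2d}: {E0 * work / efficiency:.4f} Wh per search")
```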

For the sake of putting energy-per-search in more familiar terms, Google notes that the average adult requires 8000 kilojoules (kJ) a day from food. It then concludes that a search is equivalent to the energy a person would burn in 10 seconds, a figure checked in the sketch below. No doubt brains perform search much differently from Google, but efforts to explore energy use by brains have proved confounding. PET scanning, for example, is not a very reliable tool for localizing function to specific parts of the brain, and its temporal resolution is pitiful. It is, however, not too bad at measuring global glucose utilization, from which energy use can be inferred. Subjects having their brains imaged by a PET scanner while performing a memory retrieval task frequently appear to use less energy than when resting. So if we accept the bigger picture in some of these studies, we often see the counterintuitive result that the googlewatt for a brain, at least transiently and locally, can sometimes take on a negative value. This is not totally unexpected, since inhibition of neuron activity balances excitability at nearly every turn. The situation may be likened to that of a teacher silencing the background din of an unruly class and demanding attention before the lesson begins.
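As promised, a quick check of Google’s 10-second equivalence, using only the numbers already given:

```python
# Check Google's claim that one search equals ~10 seconds of human energy use.
daily_intake_j = 8000e3                          # 8000 kJ per day, per Google
metabolic_power = daily_intake_j / (24 * 3600)   # ~92.6 W average

search_energy_j = 0.0003 * 3.6e6                 # 0.0003 kWh = 1080 J

print(f"average metabolic power: {metabolic_power:.1f} W")
print(f"human-time equivalent per search: {search_energy_j / metabolic_power:.1f} s")
# prints ~11.7 s, close to Google's round figure of 10 seconds
```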

To find a more relevant comparison to biological wetware, let’s take a quick look at IBM’s Watson, the supercomputer of Jeopardy! fame. When operating at 80 teraflops, it processes some 500 GB, the equivalent of a million books, per second. To achieve this kind of throughput, Watson replicates the 4 TB of data in its filesystem across 16 TB of RAM. While no longer state of the art, Watson is certainly no slouch.
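Those figures imply some tidy ratios. The sketch below simply unpacks IBM’s numbers; the per-book size and per-flop byte count are arithmetic consequences, not published specs:

```python
throughput_Bps = 500e9   # 500 GB per second, per IBM
books_per_sec = 1e6      # "a million books" per second
flops = 80e12            # 80 teraflops

print(f"implied size per book: {throughput_Bps / books_per_sec / 1e3:.0f} KB")  # ~500 KB
print(f"bytes streamed per floating-point op: {throughput_Bps / flops:.4f}")    # ~0.006
print(f"seconds to sweep the 16 TB RAM image: {16e12 / throughput_Bps:.0f}")    # ~32 s
```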

Each of Watson’s 90 Power 750 server nodes has four eight-core processor chips, for a total of 32 cores per node. Each 567 mm² chip, fabricated with a 45 nm process, has 1.2 billion transistors. The Power 750 server was based on the earlier 575 server but was designed to be more energy efficient and to run without water cooling. As it is air-cooled, the 750 cannot consume more than 1600 W of power and is therefore limited to 3.3 GHz. The 575 could handle 5400 W and run a bit faster, at 4.7 GHz. Just in case you are wondering where these processor speeds come from in the first place, it may be comforting to know that they are probably not just pulled out of a hat. They appear to belong to a sequence known as the E6 preferred number series, which IBM must have a special fondness for, and which is, of course, eminently practical.
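For the curious, the E-series spaces its values geometrically, six per decade in the case of E6, so each canonical value is a rounding of 10^(k/6). A few lines make the fit plain:

```python
# E6 preferred numbers: six geometrically spaced values per decade, ~10**(k/6).
canonical_e6 = [1.0, 1.5, 2.2, 3.3, 4.7, 6.8]

for k, value in enumerate(canonical_e6):
    ideal = 10 ** (k / 6)
    print(f"10^({k}/6) = {ideal:.2f}  ->  canonical E6 value {value}")
# 3.3 and 4.7, the two clock speeds above in GHz, sit right on the series.
```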
