Anyone paying attention to recent technology headlines knows that buying servers is just one part of the total cost. It costs power to run them, and power to cool them, and power costs money. AMD has just sponsored a study by Lawrence Berkeley National Laboratory staff scientist Jonathan Koomey that tries to answer the question: just how much power do US servers slurp down each year?

Koomey, who is also a consulting professor at Stanford, claims that his analysis is the most comprehensive to date and is based on the best available data from IDC. He concludes that in 2005, the total power consumption of US servers was 0.6 percent of overall US electricity consumption. When cooling equipment is added, that number doubles to 1.2 percent—the same amount used by color televisions.

Between 2000 and 2005, server electricity use grew at roughly 14 percent per year, meaning it roughly doubled over that five-year span. By Koomey's 2005 estimate, servers and their associated equipment drew roughly 5 million kilowatts (5 GW) of power, costing US businesses around $2.7 billion in electricity.
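
For anyone who wants to sanity-check those figures, a rough back-of-the-envelope calculation works out as shown below. The electricity price is our own assumption (roughly six cents per kWh), not a number taken from the study.

```python
# Back-of-the-envelope check of the growth and cost figures above.
# ASSUMPTION: an average electricity price of about $0.06/kWh in 2005;
# the study's own price input may differ.

annual_growth = 0.14                       # 14 percent per year, 2000-2005
growth_factor = (1 + annual_growth) ** 5   # ~1.93, i.e. roughly doubling
print(f"Five-year growth factor: {growth_factor:.2f}x")

power_gw = 5.0                             # ~5 million kW of average demand
hours_per_year = 8760
energy_twh = power_gw * hours_per_year / 1000    # ~44 TWh per year
print(f"Annual energy: {energy_twh:.0f} TWh")

price_per_kwh = 0.06                       # assumed average price in $/kWh
cost_billions = energy_twh * 1e9 * price_per_kwh / 1e9
print(f"Annual cost: ${cost_billions:.1f} billion")  # lands close to $2.7 billion
```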

Koomey notes that this represents the output of five 1 GW power plants. Or, to put it another way, it's 25 percent more than the total possible output of the Chernobyl plant, back when it was actually churning out power rather than just irradiating the surrounding area.
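
The power-plant math itself is simple; the quick sketch below assumes Chernobyl's four roughly 1 GW reactors, which is where the 25 percent figure comes from.

```python
# Expressing the same 5 GW in power-plant terms.
server_load_gw = 5.0
plant_gw = 1.0                      # one large baseload plant
print(f"{server_load_gw / plant_gw:.0f} one-gigawatt plants")            # 5

chernobyl_gw = 4 * 1.0              # four ~1 GW reactors at the site
print(f"{server_load_gw / chernobyl_gw - 1:.0%} more than Chernobyl")    # 25%
```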

Data source: Jonathan Koomey

If current trends continue, server electricity usage will jump 40 percent by 2010, driven in part by the rise of cheap blade servers, which push overall power use up faster than larger machines do simply because so many more of them get deployed. Koomey notes that virtualization and server consolidation will work against this trend, though, and it's difficult to predict what will happen as data centers increasingly standardize on power-efficient chips.
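
To make the consolidation point concrete, here's a purely illustrative estimate; the server counts and wattages below are invented for the example and don't come from Koomey's report.

```python
# Purely illustrative: hypothetical server counts and wattages,
# not figures from the study.
standalone_servers = 10
watts_each = 250            # assumed draw of a lightly loaded standalone box
virtualized_hosts = 2
watts_per_host = 400        # assumed draw of a larger, busier host

before_kw = standalone_servers * watts_each / 1000
after_kw = virtualized_hosts * watts_per_host / 1000
print(f"Before consolidation: {before_kw:.1f} kW")
print(f"After consolidation:  {after_kw:.1f} kW")
print(f"Reduction: {1 - after_kw / before_kw:.0%}")   # ~68% in this toy case
```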

Server power usage has become a big enough issue to interest the Environmental Protection Agency. Andrew Fanara, who heads the team that develops the Energy Star specifications, applauded the new research and expressed hope that it would spur changes in the industry. "The Environmental Protection Agency (EPA) applauds AMD and this latest benchmarking effort to better understand the global impact data centers have on energy consumption," he said. "We are looking forward to continuing our work with the IT industry to forge new, energy-efficient solutions that benefit both consumers and our global environment."

AMD hopes that these energy-efficient solutions will include its own chips, and the company now offers an entire line of efficient processors. It has even gone so far as to develop a ticker showing the worldwide cost of not using AMD servers, an amount that currently stands at $1 billion.

Not that Intel has been sitting still. The rollout of the Core platform has shown what the chip giant can do when it sets its collective mind to performance-per-watt concerns, and Intel has been aggressively trumpeting the efficiency of its chips in speeches and whitepapers for more than a year.

It's hard to find any downside to the current focus on efficiency, unless you happen to be the CEO of Exelon. It's become so popular to "go green" (and save money) that the SPEC benchmarking consortium has even drawn up a metric for measuring power efficiency.
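
An efficiency metric of that sort boils down to dividing measured throughput by measured power. The sketch below shows the general idea with invented numbers; it isn't the consortium's actual benchmark, workload, or scoring rules.

```python
# General idea behind a performance-per-watt metric: run a workload at
# several load levels, record throughput and wall power, then divide.
# The numbers below are invented for illustration.

measurements = [
    # (operations per second, average watts at the wall)
    (120_000, 210),   # near full load
    (60_000, 160),    # half load
    (0, 95),          # idle still draws power
]

total_ops = sum(ops for ops, _ in measurements)
total_watts = sum(watts for _, watts in measurements)
print(f"Overall efficiency: {total_ops / total_watts:.0f} ops per watt")
```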

Saving money and lessening the Internet's environmental impact should please both baby seals and corporate suits, something that's tough to do, so more power to those working to use less.