Analysis Google, Facebook, and likely other major tech firms are investigating ARM-compatible chips to drive low-power servers, a move that could ultimately shake up the types of processors that sit inside data centres.

We were prompted to weigh up the pros and cons of moving away from trusty x86 into the problem-filled world of ARM after a Bloomberg rumor circulated on Thursday that Google was planning to use ARM chips for servers, and after we spotted a smoking-gun blog post in which Facebook indicated it hoped to evaluate ARM systems for production use.

Companies are interested in ARM chips due to their historically thrifty power consumption; right from the start, the processor architecture – born from a small team at Acorn Computers in the 1980s – was designed along RISC (reduced instruction set computing) lines. This means ARM cores do lots of simple operations relatively quickly; this simplicity, and the lack of legacy cruft to support, keeps the transistor count low, which means less power is required.

This customisable architecture is wildly popular in the battery-powered gadget and embedded electronics worlds, where processing performance isn't key – anything taxing can be offloaded to dedicated silicon – so the chips can be run slower and thus consume even less power.

Compare this to Intel's CISC (complex instruction set computing) design, which offers a larger range of operations, has a mountain of legacy tech to support – from 16-bit real mode all the way up to 64-bit long mode – and generally runs at higher speeds to give punters the most bang for their bucks. All this adds up to beefier, power-guzzling packages.

Though ARM cultists portray RISC as being fundamentally better at low-power computation, academic studies have disproved this [PDF], noting instead that the main differences in power consumption come from historical implementations – ARM has spent nearly two decades living in your pocket, whereas Intel has resided in your honking big desktop, and so on. Indeed, the heart beating in today's x86 chips is RISC in design albeit wrapped in a CISC compatibility blanket.

ARM has for a long time focused on cutting power consumption due to its home markets being mobile and non-performance-demanding devices, whereas Intel previously emphasized speed; chips powered by ARM cores are built from the ground up to sip rather than suck current. The drawback is that they beaver away at a relatively leisurely pace.

Mobes and fondleslabs, ARM. Gaming rigs, x86. Got it. So where do Google and Facebook fit in?

Consumer-serving web giants spend billions a year on infrastructure and millions on electricity bills. For these companies, data centres are a great sucking chest wound in their annual budget, and if you can save money, you will.

A key question here is how much of a company's given workload can be shifted onto minimalist, low-power, slower processors. For Google, its compute-intensive backend needs the sheer might of Intel's powerful Xeon chips, but for other tasks, such as front-end web serving and controlling storage, it may make sense to deploy ARM.

Facebook is a simpler proposition – its newly minted AI group aside – as the social network's workload is less CPU-intensive than Google's. But even here there are problems: "What we would like to see in an ARM server CPU is a ... minimum frequency of 2.5GHz," Facebook tech strategist Vijay Rao said in late October. "That gives us the ability to scale across our software from frontend to backend."

ARM server startup Calxeda's most recent 32-bit Midway chips had clock rates between 1.1 and 1.8GHz, so there is a way to go here. (Yes, there's more to performance than clock rates, but as a general rule of thumb, a part's GHz rating will do for the moment.)

Meanwhile, 64-bit ARM servers are due to come to market next year, with the first likely due from startup Applied Micro, at which point companies will be able to benchmark and rank the tech.

Facebook is known to want to buy community-designed computers – its hardware supply kingpin Frank Frankovsky just joined Calxeda's board – while Google may decide to go ahead and create its own custom kit, including a processor licensed from ARM.

Don't forget that ARM's business model is all about selling copies of its blueprints to the likes of AMD, Samsung, Broadcom and Calxeda, which manufacture customised chips to suit their individual needs and products. Thus, Google will need a partner to fab its chips.

Before that, Google would have to employ about 50 engineers and spend a year or so drawing up its chip, which could be based on the 64-bit Cortex-A57, then another year integrating it with its tech, suggested Calxeda's marketing chief Karl Freund, and would likely need to spend tens of millions of dollars. For Google, which minted $2.97bn in net income in its most recent financial quarter, this is a trivial expense.

Just some servers chilling out ... Facebook's 'cold storage' arrays could soon be sitting alongside ARM-powered kit

The next step would be porting Google's software over to ARM systems. The ad giant – a one hundred percent Linux shop, at least in the data centre – has worked hard to create a relatively small, monolithic software stack: assuming it isn't using a Facebook-style x86-64 translator for its code, testing and deploying on ARM will be straightforward for Googlers, although there are all sorts of optimisations that should be explored.

The company has also invested heavily in developing the expertise required to build its own processors. In 2010 it acquired a stealthy chip designer named Agnilux run by refugees from the Apple-bought PA Semi. At the time, Agnilux was thought to be working on either ARM or PowerPC server chips.