Over the past three years, Facebook has saved $2 billion in infrastructure costs and enough energy to power almost 80,000 homes, thanks to the work of the Open Compute Project and its own internal engineers in creating highly efficient data center systems, according to company officials.

The Open Compute Project (OCP) is an effort Facebook officials kicked off in 2011 as they looked for ways to rapidly scale their massive data centers without burning through huge amounts of energy or piles of cash. The giant social networking company, competing with the likes of Google and Amazon, saw a chance to open-source hardware designs, much as Linux had done for software.

More than three years later, the OCP counts many of the largest tech vendors as members, and continues to expand its reach in the data center, from servers to networking to storage. The benefits to Facebook have been significant—the carbon savings from the power saved through the use of open hardware are equivalent to taking 95,000 cars off the road, officials said March 10, the first day of the OCP Summit in San Jose, Calif.

Also at the event, Facebook officials talked about several projects within the OCP the company has been working on, including an effort with Intel to develop the group's first system-on-a-chip (SoC) compute server. Named "Yosemite," the chassis holds four SoC processor cards and provides the flexibility and power efficiency needed for scale-out data centers. Facebook is contributing the design to the OCP so that group members can adopt and build on it.

In a post on the company blog, Facebook engineer Hu Li said the social networking company over the years has adopted the scale-out approach to hardware design—using ever-larger numbers of simple, efficient systems that each offer a "moderate amount" of computing power—over the scale-up model of packing more power into a single system. Facebook found that two-socket systems, which form the foundation of most scale-up designs, were powerful, but were too big and consumed too much power.

To enable its infrastructure to scale out, Facebook turned to the Yosemite design, Li wrote.

"We started experimenting with SoCs about two years ago," he wrote, noting that at the time, SoCs on the market "were mostly lightweight, focusing on small cores and low power."

Most consumed less than 30 watts, and at first Facebook was putting up to 36 SoCs into a 2U (3.5-inch) chassis, adding up to 540 SoCs per rack. However, the single-thread performance was too low, which led to greater latency in the company's Web platform. That led engineers to seek chips with more power that still adhered to the SoC design.
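The density figures above imply a rack layout that can be sketched with some quick arithmetic. In this sketch, the 36-SoCs-per-chassis and 540-SoCs-per-rack numbers come from the article; the 15-chassis-per-rack count is inferred from them, and the 30-watt figure is an upper bound, so the rack-power total is a rough ceiling rather than a reported measurement:

```python
# Back-of-the-envelope numbers for Facebook's early SoC density experiment.
socs_per_chassis = 36           # article: up to 36 SoCs per 2U chassis
socs_per_rack = 540             # article: 540 SoCs per rack

# Inferred: how many 2U chassis fill out a rack at that density.
chassis_per_rack = socs_per_rack // socs_per_chassis
print(chassis_per_rack)         # 15 chassis (30U of rack space)

# Upper bound on SoC power draw per rack, assuming every chip
# hits the "less than 30 watts" ceiling cited in the article.
watts_per_soc = 30
max_soc_power_per_rack = socs_per_rack * watts_per_soc
print(max_soc_power_per_rack)   # 16200 W ceiling for the SoCs alone
```

This excludes shared components such as fans and networking, so actual rack draw would differ; the point is only that the density, not the per-chip wattage, was what made the design attractive before the latency problem surfaced.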

In Yosemite, each server node holds a single SoC with a 65W thermal design power (TDP), multiple memory channels, at least one solid-state drive (SSD) interface and a local management controller, Li said. The Yosemite system holds four SoC server modules that consume up to 400W of total power, or about 90W per server module.
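The Yosemite power figures above can be checked with simple arithmetic. This is an illustrative budget only: the 400W sled total and ~90W per-module figures are from the article, while attributing the remainder to shared components (networking, fans and the like) is an assumption:

```python
# Rough power-budget check for a Yosemite sled.
modules = 4                # article: four SoC server modules per system
watts_per_module = 90      # article: about 90W per server module
sled_budget = 400          # article: up to 400W total

module_total = modules * watts_per_module
print(module_total)        # 360 W drawn by the four modules

# Remaining headroom under the 400W budget; assumed here to cover
# shared infrastructure, which the article does not itemize.
headroom = sled_budget - module_total
print(headroom)            # 40 W
```

The gap between 4 × 90W and the 400W total is consistent with each module's 65W-TDP SoC plus memory, storage and management controller fitting inside its ~90W share.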