The term “open source server” just took on a whole new meaning. This morning at an event in New York, Facebook director of hardware design and supply chain Frank Frankovsky announced the creation of a foundation to guide the Open Compute Project (OCP)—an effort initiated by Facebook engineers to bring the benefits of an open-source community to the problems faced in building efficient “Web-scale” data centers. Facebook, Intel, AMD, and Asus also have contributed intellectual property to the project, including motherboard and blade server specifications.

The OCP was launched by engineers at Facebook as a result of their experience in building a highly efficient data center in Prineville, Oregon. The Prineville data center is the most efficient in the world in terms of power consumption, using 38 percent less energy than Facebook’s existing data centers and costing 24 percent less. With a power usage effectiveness (PUE) rating of 1.07, only seven watts of overhead and cooling are needed for every hundred watts delivered to the computing equipment. But getting there required Facebook’s engineers to custom-design servers, power supplies, battery backup systems, and server racks to accommodate a simplified power distribution system (480-volt distribution to reduce conversion losses, rather than stepping power down to lower voltages) and to minimize cooling requirements.
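PUE is simply the ratio of total facility power to the power that reaches the IT equipment, so a rating of 1.07 means seven extra watts of overhead per hundred watts of IT load. A quick sketch with illustrative numbers (not Facebook’s actual measurements):

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The kilowatt figures below are illustrative, not Facebook's real numbers.
it_power_kw = 1000.0   # power delivered to servers, storage, and networking
overhead_kw = 70.0     # cooling, power conversion losses, lighting, etc.
total_kw = it_power_kw + overhead_kw

pue = total_kw / it_power_kw
print(f"PUE = {pue:.2f}")                                        # PUE = 1.07
print(f"Overhead vs. IT load: {overhead_kw / it_power_kw:.0%}")  # 7%
```

For comparison, a conventional enterprise data center of the era often ran at a PUE of 1.5 or higher, meaning half again as much power spent on overhead as on computing.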

Facebook is hardly the only Web company that has had to design its own hardware. Google, Amazon, and others all have had to follow a similar path, according to Arista Networks chief development officer and Sun cofounder Andy Bechtolsheim, who spoke at today’s event in New York. “Literally all the large-scale data centers in the world are built on off-the-shelf motherboards,” he said. “Because there was no standards, everyone had to do their own thing.” It would be better, he said, if there were a standard everyone could use for building out the sorts of systems used in Web data centers and cloud computing environments.

That’s the reasoning that led Facebook to launch the OCP in April and publish the specs and designs of the hardware developed in the Prineville effort under the project’s banner, in an effort to kickstart collaboration across the industry in the model of open source software development. Now Facebook has put the OCP under the auspices of the Open Compute Project Foundation, a nonprofit organization modeled after the Apache Software Foundation, with the goal of getting rid of what Bechtolsheim, an Open Compute Foundation board member, calls “gratuitous differentiation” in hardware.

The other board members of the foundation include Goldman Sachs managing director Don Duet, Frankovsky, Rackspace chief operating officer Mark Roenigk, and Intel data center group general manager Jason Waxman. Frankovsky said that a set of bylaws for the OCP Foundation has been established to govern how organizations submit contributions. He also introduced some of the foundation’s other members, including Amazon, Asus, Dell (which is contributing to management standards), AMD, Cloudera, and Red Hat; Red Hat’s role will include certifying OCP hardware for Red Hat Enterprise Linux. Digital Realty, the data center hosting company, is also onboard.

Frankovsky also said that the foundation has formed a “strategic alignment” with the Open Data Center Alliance, a customer consortium made up of corporate IT organizations’ data center managers, and with a number of universities. “The University of North Carolina is looking at adding Open Compute to their curricula,” he said, “and we’re also working with Georgia Tech.”

In addition to Facebook’s contributions, Dell, Asus, Intel, and AMD have also contributed designs. Intel and Facebook worked together to submit two Intel-designed motherboards, “Wildcat” and “Windmill,” according to Intel’s Waxman, who said the OCP Foundation would help “democratize” the process of how the industry optimizes hardware platforms.

“The whole industry has a proud tradition of how standards have accelerated innovation,” Bechtolsheim said. “What has been missing is a standard at the system level.” He cited the development of blade servers in particular, which started to address the issues that big data centers face in ease of management and hardware swapping, “but every company built their own blade chassis. Nothing is more frustrating to a customer than having a new box come in that has something different in it that doesn’t work with a particular application.”

James Hamilton, vice president and distinguished engineer at Amazon and a member of the Amazon Web Services team, said during the event’s kickoff that an open source approach to driving the efficiency of hardware in the data center is critical going forward as companies like Amazon scale up. “Every day we add enough capacity to support Amazon as a 2.7 billion dollar business—we bring in that much more capacity every day,” he said, and saving money on that infrastructure is critical to staying profitable.

Hamilton said that the biggest cost associated with growing capacity isn’t data center space or power, but the hardware itself. And blade servers don’t solve that problem, because they cost more. “The floor space is four percent of the cost,” Hamilton said, but the servers are 57 percent, and “there’s no way you want to pay more on servers just to save on floor space.” He said the only innovation that vendors delivered with blade servers was “turning the servers 90 degrees.”
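Hamilton’s objection is easy to check with back-of-the-envelope arithmetic. The 57 and 4 percent cost shares are from his talk; the total budget and the 10 percent blade-hardware premium below are invented purely for illustration:

```python
# Hypothetical data center budget using Hamilton's cost shares:
# servers ~57% of total cost, floor space ~4%. Dollar figures are invented.
total_cost = 1_000_000.0
server_cost = 0.57 * total_cost   # $570,000
floor_cost = 0.04 * total_cost    # $40,000

# Suppose denser blade hardware halves the floor space needed
# but carries a (hypothetical) 10 percent price premium:
floor_savings = 0.5 * floor_cost        # $20,000 saved on space
server_premium = 0.10 * server_cost     # $57,000 extra on hardware
net_change = server_premium - floor_savings
print(f"Net extra cost: ${net_change:,.0f}")  # Net extra cost: $37,000
```

Because the server line item dwarfs the floor-space line item, even a large saving on space is swamped by a small premium on hardware, which is the crux of Hamilton’s argument.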

That’s a problem that Facebook was trying to address with one of its contributions to the OCP: the “Open Rack” specification, which Frankovsky called “blade servers done in open source.” The full 19-inch rack is a server blade chassis, in effect, with top-of-rack shared storage, and power distribution and battery backup integrated into the rack itself. By open-sourcing the specification and design, Frankovsky says, the hope is that systems vendors will “use the full chassis of the rack to innovate within that boundary.”

Scaling up in Europe

Facebook is applying the designs and standards developed in the Prineville effort to the other data centers it now has in the pipeline, according to Frankovsky, including the company’s first European data center in Luleå, Sweden, a town 60 miles south of the Arctic Circle.

That site, which will go live in 2014, will be three times the size of the Prineville facility, with three 300,000-square-foot server warehouses and two transformer buildings, and will draw 120 megawatts of electricity exclusively from a nearby hydroelectric power station that generates twice as much electric power as the Hoover Dam. “The national [power] grid is extremely reliable in Sweden,” Frankovsky said, “so we were able to eliminate 70 percent of the generators onsite from the design.”

Facebook hasn’t discussed the price of the new data center, but previous reports in local media put the construction costs at around $760 million, with a contribution of up to $16 million from the Swedish government.