In Facebook data centers, the meaning of the words “six pack” no longer has anything to do with beer or abs.

During the Networking @Scale event at its Menlo Park, California, headquarters Wednesday, the company’s infrastructure engineers unveiled a new element of the next-generation Facebook data center fabric. Called Six Pack, the networking switch uses its six hot-swappable modules to enable any piece of IT gear in the data center to talk to any other piece of IT gear.

Facebook, like other operators of massive data centers – companies like Google or Amazon – designs almost all of its own hardware. At these companies’ scale, the approach is more effective and economical than buying off-the-shelf gear from the usual IT vendors, because they get exactly what they need, and because suppliers compete with each other for the big hardware purchases they make on a regular basis.

The Six Pack is a core element of the new network fabric that was unveiled last November. The design relies heavily on Facebook’s Wedge switch, which its vice president of infrastructure engineering, Jay Parikh, previewed at GigaOm Structure in San Francisco in June 2014.

The seven-rack-unit chassis includes eight 16-port Wedge switches and two fabric cards. The ports are 40 Gigabit Ethernet. Top-of-rack switches aggregate connectivity in the racks, and Six Packs interconnect all the top-of-rack switches.
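The port math above is easy to check. A quick sketch (the per-port rate and module counts come from the paragraph; the aggregate figure is plain arithmetic, not a published Facebook spec):

```python
# Back-of-the-envelope capacity of one Six Pack chassis, using the
# numbers quoted above: eight Wedge-based switches, 16 ports each,
# 40 Gigabit Ethernet per port.
wedge_switches = 8
ports_per_switch = 16
gbps_per_port = 40

total_ports = wedge_switches * ports_per_switch      # 128 ports
aggregate_gbps = total_ports * gbps_per_port         # 5,120 Gbps

print(f"{total_ports} x {gbps_per_port}GbE ports "
      f"= {aggregate_gbps / 1000:.2f} Tbps aggregate")
```

That works out to 128 40GbE ports, or about 5.12 Tbps of raw switching capacity per chassis.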

The fabric makes the network a lot more scalable than Facebook’s traditional approach, which was to build several massive server clusters in a data center and interconnect them. The more the individual clusters grew, the more congested the inter-cluster network links became, putting a hard limit on cluster size. There are no clusters in the new architecture.
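A toy model illustrates why the old cluster design hit a wall: rack capacity inside a cluster grows with the cluster, while the inter-cluster uplink stays fixed. (The numbers below are illustrative placeholders, not Facebook's actual figures.)

```python
# Illustrative only: as a cluster grows, the traffic it offers across a
# fixed-capacity inter-cluster uplink grows with it, so the
# oversubscription ratio climbs until the link is the bottleneck.
def oversubscription(racks, gbps_per_rack, uplink_gbps):
    """Ratio of offered cross-cluster traffic to uplink capacity."""
    return (racks * gbps_per_rack) / uplink_gbps

FIXED_UPLINK_GBPS = 1600  # hypothetical inter-cluster link capacity

for racks in (10, 40, 160):
    ratio = oversubscription(racks, gbps_per_rack=40,
                             uplink_gbps=FIXED_UPLINK_GBPS)
    print(f"{racks:4d} racks -> {ratio:.2f}:1 oversubscription")
```

In this sketch, 40 racks saturate the uplink (1:1), and 160 racks offer four times its capacity. A fabric avoids the problem by adding uplink capacity in step with the racks instead of funneling everything through fixed inter-cluster links.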

Facebook is planning to make the Six Pack design open to the public through the Open Compute Project, its open source hardware and data center design initiative. Data center operators and vendors will be able to use the design or modify it to build their own switches and network fabrics.

The big thing about Wedge was disaggregation: individual elements of the architecture could be mixed, matched, and upgraded independently of each other. That aspect of the design was preserved in the Six Pack.

“We retained all the nice features that we had,” Yuval Bachar, Facebook hardware engineer, said. “Modularity of subcomponents, as well as up-the-stack disaggregation of software and hardware.”

Unlike traditional off-the-shelf network switches sold by Cisco and other big network vendors, Facebook’s homegrown switch hardware is not closely coupled with network software. The company has written its own Linux-based network operating system, called FBOSS, and adapted its Linux-based server management tools for network management.

Facebook also took a different approach to software-defined networking than the one taken by companies that sell commercial SDN tools. Instead of pulling the control plane (the intelligence of the network) out of the switches and into a separate controller, each switch in the fabric has its own control plane, Bachar explained.

There are no virtual network overlays. “We are using pure IP network,” he said. There are external controllers as well. “It’s a hybrid SDN, which we find to be very effective, because our switching units are completely independent.”
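One way to picture the “hybrid” split Bachar describes, where every switch keeps its own control plane while external controllers only steer policy, is a minimal model. (The class names and behavior here are illustrative, not FBOSS code.)

```python
# Toy model of a hybrid SDN: each switch computes its own routes
# (distributed control plane), while a central controller can push
# policy overrides without sitting in the forwarding critical path.
class Switch:
    def __init__(self, name, neighbors):
        self.name = name
        self.routes = {n: n for n in neighbors}  # locally computed next hops
        self.overrides = {}                      # controller-pushed policy

    def next_hop(self, dest):
        # Controller policy wins when present; otherwise the switch routes
        # on its own, so it keeps forwarding even if the controller is down.
        return self.overrides.get(dest, self.routes.get(dest))

class Controller:
    def push_override(self, switch, dest, next_hop):
        switch.overrides[dest] = next_hop

sw = Switch("rack1", ["spine1", "spine2"])
print(sw.next_hop("spine1"))   # local decision
Controller().push_override(sw, "spine1", "spine2")
print(sw.next_hop("spine1"))   # controller policy now steers this flow
```

The design choice this sketch captures is independence: because each switching unit can route by itself, a controller failure degrades optimization, not connectivity.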

The announcement doesn’t mean Facebook has replaced all the network gear in its data centers with the new systems. “Right now, we’re just starting our production environment deployment – both Wedge and Six Pack,” Matt Corddry, director of engineering at Facebook, said. The company usually tests new pieces of infrastructure by running some production traffic on them in multiple regions before full-blown implementation.