Last week’s Open Compute Project Summit was kinda crazy, with a flood of announcements: new members (the big one being Google), partnerships (the big one being Mellanox and Cumulus Networks), and contributions to OCP of both hardware and software.

Now that the summit is over and people are still grasping the gravity of the announcements and how they’re going to change the industry, let me provide some color to assist y’all.

A Primer Coat

This year, the OCP Networking Group kicked off the summit with a video summarizing the group’s progress over the last three years along with its heading for this year.

Hardware

Alpha Networks (the primary ODM for Juniper) submitted its 100Gb switch along with its 10GBASE-T (10Gb copper) platforms.

Facebook took the lid off the Wedge100 (32x100Gb, built by Accton) and the 6pack40 switches (128x40Gb, built by Accton) and has submitted them for review to the OCP Networking Group.

Edgecore Networks (Accton’s go-to subsidiary) submitted 10, count them 10, hardware contributions across different areas of networking: Data Center, Edge Routing, and Campus.

Data Center

Edgecore submitted two chassis systems, the OMP 800 and OMP 1600 (256 and 512 ports of 100Gb, built on Broadcom Tomahawk), which are the largest Clos-in-a-box setups to date. Remember, each of these 100Gb ports can be broken out into its respective 2x50Gb, 4x25Gb, 4x10Gb, or 40Gb variants.
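In practice, breakout is just a NOS configuration knob. As an illustrative sketch, here is the convention Cumulus Linux uses in its ports.conf file (other network operating systems have their own equivalents; the port assignments below are made up for the example):

```
# /etc/cumulus/ports.conf -- illustrative sketch, not from the OMP submissions
# Each 100Gb front-panel port can run whole or be split into lower-speed lanes.
1=100G      # port 1 left as a single 100Gb interface
2=4x25G     # port 2 split into four 25Gb interfaces
3=2x50G     # port 3 split into two 50Gb interfaces
4=40G       # port 4 run as a single 40Gb interface
```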

Edge Routing

Edgecore submitted the AS5912 edge switch, featuring Broadcom’s Qumran ASIC (48x10Gb + 6x100Gb) with some serious buffer space (up to 6GB).

Access/Campus

If that wasn’t enough, there are four access switches based on the AS4610 (30- and 54-port 1Gb copper, with and without PoE) and three wireless access points: 2×2 and 3×3 indoor and 3×3 outdoor. All the APs are 802.11ac, PoE-powered, and based on Broadcom chipsets.

The best part: ONIE is now on wireless access points. This means the same way you provision data center chassis and switches is the same way you provision access switches, and the same way you provision APs!
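For anyone new to ONIE: at boot it discovers a NOS installer over the network, and one common path is plain DHCP. A minimal sketch using dnsmasq to hand ONIE a default-url (DHCP option 114, one of ONIE’s documented discovery methods) pointing at an installer image; the addresses and path here are hypothetical:

```
# dnsmasq fragment -- hypothetical server, range, and installer path
dhcp-range=192.168.10.50,192.168.10.150,12h
# ONIE honors DHCP option 114 ("default-url") as an installer-discovery method
dhcp-option=114,"http://192.168.10.1/onie-installer"
```

The point of putting ONIE on APs is exactly this: one boot loader, one discovery mechanism, one provisioning workflow from the data center core out to the wireless edge.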

Finally, several hardware vendors are preparing to submit devices into those various categories, including some with new ASIC designs. Stay tuned.

Hardware Analysis

So, that was a lot of hardware announcements, but what does this all mean to the end users? It means open networking has gained some serious momentum in the last year alone.

To recap: prior to this summit, there were only 3 OCP-accepted switch designs. As of last week, there are 12 OCP-accepted switches, with 15 new hardware submissions made last week alone.

Last year, all OCP switches were Broadcom-based. This year, we have switches across multiple ASIC vendors: Broadcom, Cavium, and Mellanox (with more on the way).

Last year, all the hardware was 1U boxes for the data center. This year, we have our first chassis systems, our first edge router, our first PoE/access switches, and our first wireless access points.

I fielded many questions last week on why access switches and why wireless, so let me break it down for end users.

The first group of end users has a small data center presence (fewer than 100 boxes) but lots of employees, and therefore offices that need networking (we are talking high hundreds to low thousands of PoE/access switches, with low tens of thousands of wireless access points).

Sure, they could have been using open networking in their data center, but they would not see the same payoff (CAPEX and OPEX savings) as their larger brethren with bigger data centers.

The second group of end users uses AWS and Azure for all their compute needs, but they still have a large number of employees that need supporting (hundreds to thousands of PoE/access switches and thousands of wireless access points). This group can finally take advantage of open networking and break the vendor lock-in chokehold.

Software

Microsoft generated some buzz with its contribution of Software for Open Networking in the Cloud (SONiC), a set of tools that builds on top of SAI (the Switch Abstraction Interface) to provide platform support (sensors, fans, transceivers, et al.) and L2/L3 routing. A deep-dive blog on SONiC will show up on Packet Pushers in the coming weeks.

For me, the biggest moment last week came in the form of an engineering workshop talk by Cumulus Networks.

The talk illustrated how awesome Linux networking is by demonstrating an MLAG implementation, built on pure Linux kernel building blocks, that works across hardware vendors (e.g. Alpha Networks and Inventec boxes using the same Broadcom chip) and also across ASICs (the demo used an Edgecore AS5712 with a Broadcom Trident2 and a Mellanox SN2700 with a Spectrum chip).

If you’re not excited, let me remind y’all that an MLAG implementation is typically only supported within a given product family (e.g. only between Edgecore AS5712s). A more detailed blog post describing this MLAG implementation will show up on Cumulus’s website in the coming weeks (disclosure: I’m associated with Cumulus Networks).
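The "pure Linux kernel building blocks" here are the bonding and bridge drivers everyone already runs on servers. A rough sketch of one switch of such a pair, in ifupdown-style /etc/network/interfaces syntax (interface names are hypothetical, and the real implementation adds a daemon over the peer link to coordinate the two switches; this fragment only shows the kernel-level plumbing):

```
# /etc/network/interfaces fragment -- hypothetical sketch, one half of the MLAG pair
# Host-facing LACP bond; the peer switch presents a matching bond to the same host.
auto bond1
iface bond1
    bond-slaves swp1
    bond-mode 802.3ad

# Inter-switch peer link carrying synchronization traffic between the pair.
auto peerbond
iface peerbond
    bond-slaves swp49 swp50
    bond-mode 802.3ad

# Plain kernel bridge tying the host bond and the peer link together.
auto br0
iface br0
    bridge-ports bond1 peerbond
```

Because both the bond and the bridge are standard kernel constructs, the same config works regardless of which vendor's box, or which ASIC, is underneath, which is exactly what the cross-vendor demo showed.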

Update 1: 2016-03-22, updated link to SONiC post.