In Q1 2019, Packet is releasing notable microserver gear into its cloud. Packet is a bare metal hosting provider known for democratizing access to new compute architectures. It just so happens that Packet’s cage in Sunnyvale, California is in the same data center as our primary lab, so we have watched the company grow over the years. With the new microservers, we expect to see something different. They are going to come in Intel, AMD (more on that soon), and Ampere flavors. They will come in Open19 chassis. Perhaps most interestingly, the microservers will incorporate SmartNICs from Netronome. We had a chance to chat with Packet and Netronome about the announcement, and we have a few hardware speculations of our own. Hint: this may be the first cloud AMD EPYC 3000 you can try.

Packet Microservers with Netronome SmartNICs

The first part of the announcement is that Packet is rolling out microservers. Although microservers were seen as a huge industry trend years ago, as we noted in our 2014 piece Intel can hold ARM (largely) out of the data center for 3 years, the buzz has since died down. At the same time, companies like Facebook are using Intel Xeon D microservers as their front-end web hosting nodes. With its new generation, Packet is not just introducing a new microserver service; it is also adding something to make those microservers more useful: Netronome SmartNICs.

Here is why that matters. With microservers, a challenge is that high-speed network processing takes a good amount of power. When you have 20, 40, or even 100+ large cores per server, losing a handful of cores to network functions such as packet processing is an acceptable trade-off. In the microserver world, doing network functions on CPU cores has a much bigger impact: when two cores of an 8-core CPU are doing packet processing, a quarter of your compute resources are consumed by network functions. The slide the companies showed was geared more towards larger servers, using what looks like dual 28-core Platinum 8180 CPUs like we saw in our Dell EMC PowerEdge R740xd and R640 reviews.
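To put rough numbers on that trade-off, here is a quick sketch. The core counts below are illustrative (a 56-core dual-socket box versus an 8-core microserver CPU), not measurements from either company:

```python
def network_overhead(total_cores: int, cores_for_networking: int) -> float:
    """Fraction of a server's CPU cores consumed by network functions."""
    return cores_for_networking / total_cores

# A big dual-socket box barely notices a couple of cores doing packet processing...
big_server = network_overhead(total_cores=56, cores_for_networking=2)

# ...but on an 8-core microserver, the same work eats a quarter of the CPU.
microserver = network_overhead(total_cores=8, cores_for_networking=2)

print(f"56-core server: {big_server:.1%} of cores lost to networking")   # 3.6%
print(f"8-core microserver: {microserver:.1%} of cores lost to networking")  # 25.0%
```

The same two cores of overhead go from a rounding error to a quarter of the machine, which is why offloading matters far more at microserver scale.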

Now the networking vendors are striking back, giving rise to SmartNICs. Packet is using Netronome SmartNICs. These SmartNICs allow developers to write code and deploy it via eBPF (somewhat analogous to DPDK, and worth a read), offloading network processing functions to the NIC instead of the CPU.
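To give a flavor of what gets offloaded: an eBPF/XDP program is a small per-packet function that returns a verdict such as drop or pass. Real programs are written in restricted C, compiled to eBPF bytecode, and attached via XDP so hardware like the Netronome NIC can execute them; the plain-Python sketch below only mimics that decision logic on a raw Ethernet frame (the verdict constants match the kernel's enum xdp_action):

```python
import struct

# XDP-style verdicts; values match the Linux kernel's enum xdp_action
XDP_DROP, XDP_PASS = 1, 2

ETH_P_IP = 0x0800  # EtherType for IPv4

def filter_frame(frame: bytes) -> int:
    """Toy packet filter: pass IPv4 frames, drop everything else.

    An offloaded eBPF/XDP program makes the same kind of per-packet
    decision, but on the SmartNIC instead of a host CPU core.
    """
    if len(frame) < 14:  # too short to hold a full Ethernet header
        return XDP_DROP
    (ethertype,) = struct.unpack("!H", frame[12:14])  # bytes 12-13, network order
    return XDP_PASS if ethertype == ETH_P_IP else XDP_DROP

# Minimal frames: 6-byte dst MAC, 6-byte src MAC, then the EtherType
ipv4_frame = b"\xff" * 12 + struct.pack("!H", ETH_P_IP)
arp_frame = b"\xff" * 12 + struct.pack("!H", 0x0806)  # ARP EtherType

print(filter_frame(ipv4_frame) == XDP_PASS)  # True
print(filter_frame(arp_frame) == XDP_DROP)   # True
```

With the offload in place, filtering like this never touches a host core, which is exactly the win on an 8-core microserver.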

One area we will take issue with is the power numbers: at least for the 1U server with an Intel Xeon Platinum 8180 on this slide, they take a bit of creative license, veering away from reality. Fully configured dual Platinum 8180 systems in the lab generally use no more than 700W running AVX-512 simulations, and more like 500W under load outside of AVX-512, as we showed. The systems we used for those tests had NVMe and SAS drives, multiple NICs, 384GB of RAM, and so forth, so they are not directly comparable to microserver configurations. If you take 400W for four microserver nodes, that is more than we would expect to see from the single Xeon Platinum 8180 system with a traditional NIC to the left. Then again, we test these things, and most readers will gloss over a marketing slide like this.
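As a quick sanity check on the slide, the arithmetic is simple. The 400W figure is the slide's claim for four nodes; the 500W figure is our own measured number for a fully configured dual Platinum 8180 system under non-AVX-512 load:

```python
slide_four_node_watts = 400     # slide's claim for four microserver nodes
measured_dual_8180_watts = 500  # our lab: dual Platinum 8180 under load, non-AVX-512

watts_per_node = slide_four_node_watts / 4
print(f"{watts_per_node:.0f}W per microserver node")  # 100W per node

# Four nodes at the slide's figure already approach a fully loaded
# dual-socket system, so the single-socket comparison looks generous.
print(slide_four_node_watts < measured_dual_8180_watts)  # True
```

At 100W per node, the slide's microserver total lands uncomfortably close to a loaded dual-socket machine, which is why the single-8180 comparison does not pass the smell test.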

Back from the tangent, microservers are less expensive to run per node than larger machines for applications like load balancers, proxies, and VPN nodes. Another major benefit is that Packet has a great setup for developers who want to try something different. Big companies like Microsoft are using SmartNICs to accelerate performance in key areas by disaggregating functions and offloading more from the CPU. With the Packet and Netronome SmartNIC microservers, the average developer now has access to this class of technology, and that is a good thing.

Some of the other benefits are that you can do things like change the NIC pipeline, update NIC firmware on the fly, and run applications directly on the NIC itself. These are features that traditional NICs do not have.

If you are a developer and want to try SmartNICs without procuring hardware, this is going to be a great option. Likewise, if you are using the Packet cloud or want to deploy SmartNIC enabled services, this is going to be a leading option in the space.

Speaking of those microservers, we are going to delve into the hardware, even though hardware is not usually the focus of a cloud offering.

Is a Packet AMD EPYC 3000 Microserver Coming?

On the call, Intel was listed as a vendor for the microservers. So was Ampere. We are quite excited to see the Ampere eMAG come out in a cloud. Packet also extolled the virtues of Open19, a hardware form factor standard backed by LinkedIn and now Packet that eases deployment, even though OCP has largely eclipsed it. There will be 1U and Open19 half-width microserver options with four microservers per sled.

Each of these microservers will have a CPU and a Netronome NIC. We can also see two DIMMs (they look like G.Skill brand DIMMs in the rendering) and two M.2 drives per node. Under the small heatsink of each node is the Netronome SmartNIC. The SmartNICs are connected in a mesh, with traffic then sent to a shared network port.

Something clicked immediately when I saw this picture: I have seen something in our lab that looks like what is being depicted.

Here is a shot from our Piecing Together the iEi Puzzle AMD EPYC 3000 Spotted in the Wild piece, where we were the first site to spot the AMD EPYC part outside of the AMD EPYC Embedded 3000 Series launch event in London. It looks very similar to the renderings. Also, the Intel Xeon D, Atom C3000, and Core i3/ Xeon E-2100 packaging all look quite different, as does the Ampere eMAG packaging.

To further bolster this, we noticed that Packet, about a month ago, had an AMD EPYC 3000 “Wallaby” test bed. This is the same one we used in our AMD EPYC 3251 Benchmarks and Review.

That sure looks like that AMD EPYC 3000 series Wallaby platform. BTW Sunnyvale CA build-out is looking great Packet team. https://t.co/BTMVCVGjXC — STH (@ServeTheHome) November 20, 2018

Packet did not say that it was an AMD EPYC 3000 series part, but it looks like one from the renderings, and we know Packet has access to these chips.