Sometimes there is a fine line between inexpensive and cheap. At a $1,999 list price for a 48-port 25GbE switch with six 100GbE ports, one may simply not care. The Ubiquiti UniFi USW-Leaf is, if nothing else, extraordinary in that it is quite possibly the lowest-cost switch of its type. Indeed, many switches in this class cost between 2.5x and 5x as much, which makes it nothing less than an eyebrow-raising value proposition. In this piece, we are going to explore how a switch is delivered at this price and what trade-offs are made. Since this is an early access product, we are calling this an overview rather than a review.

Setting the context for this entire overview: Ubiquiti is not positioning the USW-Leaf as a $10,000 offering. Instead, the product briefing puts the list price at $1,999.

That list price is a big deal. This is a low-cost 25GbE / 100GbE switch for the masses.

Ubiquiti UniFi USW-Leaf Overview Video

Since this is a longer article, we have an accompanying overview (not review) video.

Of course, we still have what is covered in the video and more in the article below.

Ubiquiti UniFi USW-Leaf Hardware Overview

Looking at the front of the switch, we have 48x 25GbE ports. These are SFP28 ports that are backward compatible with SFP+ 10GbE as well. On the right side, there are six QSFP28 100GbE ports. This is a fairly standard configuration that we have seen over generations of 25GbE data center switches. The switch's console port is a USB Type-C port, which is a little different.

On the front, we see some of the first indications that this switch is going to offer a different take on data center hardware. There is a 1.3″ touchscreen LCD. On one hand, this is a nice feature that provides some basic status views. In the future, we hope it can do more, since checking status on a 1.3″ LCD mounted in a rack is not the most modern approach.

On the other hand, let us face it. This is designed to be a top of rack switch. Some will, of course, be mounted elsewhere but a touchscreen is fairly hard for the average person to reach when it is in the 40th-48th rack spots. It is just too high to be useful. Given the USW-Leaf’s focus on cost optimization, this choice seems out of place. Still, many of these will be mounted in smaller racks or elsewhere so this may be useful in many deployments.

That LCD touchscreen is important for another reason. First, it requires another Arm processor to run, which we can find just behind the LCD.

Second, and more importantly, it takes up space on the front panel. Generally, 48x SFP28 and 6x QSFP28 switches have an out-of-band management Ethernet port as well as a USB port for loading firmware images; the USW-Leaf has neither. Practically, this means your management traffic is going over one of the 54 primary ports rather than to a 1GbE management network switch. Here is an example: even the half-width Dell EMC PowerSwitch S5212F-ON, which does not have the same redundancy level due to its small form factor, still includes these ports.

This feature does not normally show up in our data center switch reviews, but the USW-Leaf also has Bluetooth. The intent is that you can use the UniFi Network app to set up the switch if you want. We have yet to get this feature working, as it did not work with a Pixel 3 XL or Samsung phones running Ubiquiti's app. Still, this is undoubtedly different from what we are accustomed to in this space.

When we move to the rear, things get a little weird. There are dual power supplies, but they are fixed internal units. We see internal power supplies in half-width switches with 100GbE ports such as the Dell EMC S4112-ON and Dell EMC PowerSwitch S5212F-ON mostly due to the compact footprint. Internal fixed power supplies are also common on lower-end switches, often seen in the sub-$1000 space. They are far less common in this type of data center switch.

By the same token, the fans are not hot-swappable. Fans have become extremely reliable, as we covered in a piece not too long ago; even hyper-scale data centers only keep a handful of spares. (See: Are Hot-Swap Fans in Servers Still Required.)

Inside the switch, we see the two 350W Mean Well power supplies. These are actually fairly nice units for a cost-optimized switch. In the lower-cost segment, we sometimes see vendors try to make their own with varying degrees of success.

An interesting nuance to this design is how power is sensed. Since these are not standard data center PSUs, there is limited communication between the PSU and the switch, and that can have a profound impact on operations. The reason is simple: if you unplug a single PSU, the LCD displays an error message that the PSU has failed and that you should contact support. The switch motherboard is simply looking for power, not seeing any, and assuming the PSU has failed. More standard hot-swappable data center PSUs can tell the difference between a PSU failure and the AC input being unavailable. These small design decisions can have an impact on operations.
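To make the distinction concrete, here is a conceptual sketch, not Ubiquiti's firmware, contrasting a bare power-good signal with a PMBus-style status word as found on more standard data center PSUs. The bit positions follow the PMBus STATUS_WORD layout (bit 13 = input fault, bit 11 = power-good negated), simplified for illustration:

```python
# Conceptual sketch: why a bare power-good line cannot distinguish
# "PSU failed" from "AC cord unplugged", while a PMBus-style status can.

def diagnose_simple(power_good: bool) -> str:
    # With only a power-good signal, any loss of output looks like a failure.
    return "OK" if power_good else "PSU failed - contact support"

def diagnose_pmbus(status_word: int) -> str:
    # Simplified PMBus STATUS_WORD bits: bit 13 = INPUT (e.g. AC lost),
    # bit 11 = POWER_GOOD negated (output fault).
    INPUT_FAULT = 1 << 13
    PGOOD_NEGATED = 1 << 11
    if status_word & INPUT_FAULT:
        return "AC input lost - check the cord or feed"
    if status_word & PGOOD_NEGATED:
        return "PSU output fault - replace the PSU"
    return "OK"

print(diagnose_simple(False))   # simple sensing: everything looks like a failure
print(diagnose_pmbus(1 << 13))  # PMBus: can report AC loss distinctly
```

With the simple scheme, pulling one power cord and a genuinely dead PSU produce the identical "contact support" message, which is exactly the behavior we saw on the USW-Leaf's LCD.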

There is a 30GB M.2 SSD powered by a Silicon Motion controller. It is fairly standard to see a low-capacity SSD in switches. The fact that the drive is an M.2 form factor means that one could swap it if necessary.

When it comes to the switch ASIC and management controller, we see an intriguing pairing.

The main switch ASIC is really interesting. It is a 55W Nephos Taurus 1.8T unit. We have heard of a few other vendors building around this chip, such as Lite-On, but this is the first time we have seen one in the wild. The primary switch chip most designs have used to date is from the Broadcom Tomahawk line, so this is certainly a departure in the 25GbE space. While you have likely heard of Broadcom, you may not have heard of Nephos. They are a subsidiary of MediaTek that is focused on cost-optimized switch chips.

The CPU is listed as a quad-core Arm Cortex A57. As we tore the switch apart, we found something else interesting: this is an Annapurna Labs chip. The AL-324 is running at 1.7GHz. This same CPU is commonly found in some of the lower-end QNAP NAS units. For those who have never heard of Annapurna Labs, since it has been years since our Gigabyte Annapurna Labs piece, the company is owned by Amazon and is credited with chips such as the AWS Graviton and Graviton2 designs. The AL-324 more resembles the older market focus we saw in that Gigabyte piece and with the QNAP systems. Still, it is notable that Annapurna Labs is still selling chips outside of AWS.

There is one other small item we wanted to point out, and that is internal cable management. For some reason, Ubiquiti decided it was best to tape loose cable ends down to keep them secure. If you have a 4-pin fan that needs replacement, you would need to lift the tape, replace the fan, and then likely find tape to re-secure the cables. This is not necessarily a bad system; it is just that when you open data center gear every week from dozens of other vendors, where there are either chassis tie-downs, zip ties, or both, this is just different.

Those tie-downs and zip ties just make life easier when installing gear. We also wanted to point out that the packaging for the USW-Leaf is really nice. It feels more like a consumer product, such as a phone or a notebook, rather than functional data center packaging. Looking at the accessories, we can see examples:

One feature we like is that Ubiquiti includes 1U rack mounting rails. These are functional. Of course, one feature we appreciate on many of our Dell switches, and across just about the entire mainstream server market, is tool-less rails.

One will notice that beyond the included 25GbE SFP28 DAC, there are a number of accessories. Ubiquiti took the angle of individually wrapping each rack component. We left one piece wrapped in the photo just to show this level of detail. This keeps the rails looking nice in shipping. At the same time, they will never be seen in a rack, and it takes a lot of time to undo everything. We unboxed two 100GbE switches from other vendors and one 54-port switch from a third, and it took several minutes longer to get the Ubiquiti unit out and ready.

Hardware such as the cage nuts and screws was placed in a carrier. It looks impressive, and undoubtedly this will be a YouTube darling. At the same time, this again takes longer to get components out of than the industry-standard zip-lock bag.

Then we get to the worst part of the switch experience, and we are saying this even though we could not get the unit to sync to UniFi controllers, a topic we will cover in the management discussion. Instead, the worst part of the switch was the cage covers. Virtually the entire industry uses covers with small knobs on them. That makes the SFP28 and QSFP28 covers easy to remove even if a switch is mounted 6 feet (~1.8 meters) off of the ground with optics and DACs already installed. Instead of using these standard covers, Ubiquiti went with branded covers, putting the logo where the knob normally is.

To remove the covers, one can reach a fingernail or something into the top and bottom and pull out. In a top of rack position when the cages around this are filled, good luck getting these out. This looks good but is a completely non-functional departure from the industry standard. On small switches, this is less of an issue. These can work fine on Ubiquiti’s lower-end 8-port switches. On a large TOR switch like this, you are going to want to remove them before installation. Even removing them on a workbench took several minutes longer than it does removing the industry-standard knobbed covers.

Overall, the packaging effort was top-notch and beautiful. There will be those who unbox this on video and will “wow” at the slickness. At the same time, for a data center product, it takes longer to unpack and prepare. While the switch is less expensive than many of its peers, all of this extra packaging means it took us 15-20 minutes longer to get the USW-Leaf prepared for a rack versus the three other data center switches we unboxed the same day. For a data center product, we would gladly trade this nice packaging for some of the missing features such as hot-swap fans and power supplies, or even an out-of-band management port.

Ubiquiti UniFi USW-Leaf Management

There are two options for management. First, there is a CLI. It is not as robust as many other solutions we see, but as a first effort in this class, it is something to build on. We are simply going to link the CLI guide in this article. If you look at the Nephos documentation that is out there, it seems like Ubiquiti has a software base it can build on; some features are simply not implemented yet. That is absolutely fine. Even in the CLI, features such as multi-chassis LAG as well as TFTP boot are missing. We know these are the early days of the switch, so we are going to give it a pass. If you want a production-ready CLI experience, in early 2020 this is not ready yet.

The other option is Ubiquiti's UniFi controller software. For many of our readers, the idea of having your data center top-of-rack switch managed by the same environment as your Wi-Fi APs will seem different, to say the least. This is especially true since there is no out-of-band management port one could use for some segregation.

We first tried getting the UniFi controller set up on a server with Mellanox, Intel, Broadcom, and QLogic 25GbE NIC ports. We could ping and access the interface on a simple network with only the server and the switch, but we could not get the USW-Leaf to be discovered by the controller.

We then brought it to a simple existing environment with two UniFi APs and a controller. This again is a completely flat network and we know AP discovery works without issue. Here again, we could not get the USW-Leaf to connect.

We cycled through 25GbE ports. We tried different NICs. We could not adopt the switch. We could SSH into the switch. The switch could see the controller, but even trying manual inform commands via SSH did not work.
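For reference, the manual inform mechanism on UniFi gear is the standard `set-inform` command run over SSH on the device, pointed at the controller's inform endpoint (port 8080 by default). A sketch with documentation-placeholder addresses:

```shell
# On an unadopted UniFi device, SSH in (default credentials ubnt/ubnt)
# and point it at the controller's inform endpoint.
# 192.0.2.x addresses below are placeholders, not our lab addresses.
CONTROLLER_IP="192.0.2.10"
INFORM_URL="http://${CONTROLLER_IP}:8080/inform"
# Run on the switch over SSH:
#   set-inform "${INFORM_URL}"
echo "set-inform ${INFORM_URL}"
```

After a successful `set-inform`, the device normally shows up in the controller as pending adoption; on the USW-Leaf, this step did not get us any further.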

This is an early overview. Undoubtedly this will work in the future since that is Ubiquiti’s design.

We were clearly missing something. However, if you are going to have a WebUI anyway, why not run the HTTP server on the Annapurna Labs CPU, as other data center switches with WebUIs do, such as the Dell X1052 and QCT QuantaMesh T1048-LY4A?
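To show how lightweight an on-device status endpoint can be for a quad-core Arm CPU like the AL-324, here is a minimal sketch using only Python's standard library. The port, paths, and JSON fields are illustrative assumptions, not Ubiquiti's actual management API:

```python
# Sketch of a tiny on-switch status endpoint; fields are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATUS = {"model": "USW-Leaf", "sfp28_ports": 48, "qsfp28_ports": 6,
          "psu": ["OK", "OK"]}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a small JSON status document for any GET request.
        body = json.dumps(STATUS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the sketch quiet; real firmware would log properly.
        pass

# On the switch this would be started as, e.g.:
#   HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```

Even a sketch like this makes the point: the hardware is more than capable of serving its own management UI directly, without requiring an external controller for basic visibility.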

This is an area where we both hope and expect Ubiquiti will innovate.

Next, we will continue our overview with some additional observations as well as our feedback summary.