About a month ago I picked up a Cisco UCS 6140xp for cheap on eBay to play around with 10Gb networking. By complete accident I also managed to buy a 6120xp for next to nothing after winning an eBay auction with no competing bids. So I figured I might as well give a review of the two of them.

The Cisco UCS 6100 series Fabric Interconnects are part of Cisco’s unified datacenter model, which allows servers, networks, and storage to be managed from a central point, making it easy to provision new equipment. Or at least that’s my understanding. I was interested in them because they have a large number of 10Gb network ports and I got a good deal on them. The two models are essentially the same system (and both are essentially Nexus 5000 series switches, but in green and with different firmware), though there are a few key differences between them.

The UCS 6120xp

The 6120xp is a 1U system with 20 SFP+ ports capable of Ethernet, Fibre Channel, or FCoE. It also has one expansion slot for additional Ethernet or Fibre Channel ports. It sports two hot-swap fan modules (1+1 redundant) and two 550W hot-swap power supplies (1+1 redundant). With nothing else running in my lab the noise level sits at roughly 60dB.

The UCS 6140xp

The 6140xp is a 2U system with 40 SFP+ ports and two expansion slots. It sports five hot-swap fan modules (4+1 redundant) and two 750W hot-swap power supplies (1+1 redundant). The noise level of the 6140xp is very close to that of the 6120xp, sitting roughly 1dB lower.

Observations

These things are LOUD. They are meant to live in datacenters where people aren’t usually trying to sleep, so in an open room in my little apartment they are too loud. Stowing them in a closet with the rest of my primary systems brings the noise level down quite a bit, but I wouldn’t want to run them 24/7 without doing some soundproofing on that room. They also take a long time to fully initialize and begin switching packets; it seems to take 5 minutes or more for one of these to fully start up, though I haven’t timed it.

The management functionality isn’t too terrible on these, but the CLI procedures differ quite a bit from those of the Catalyst series switches. After a bit of a learning curve they can be configured pretty easily from the management console, and Cisco also offers a nice remote management software package. The management software does rely on Java, but so far it hasn’t required an ancient version of Java to run.
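To give a flavor of how different it is from Catalyst IOS: the UCSM CLI is object-scoped rather than mode-based, and changes only take effect when you commit them. A rough sketch of configuring a port as an Ethernet uplink looks something like the below (the fabric letter and slot/port numbers are placeholders, and exact syntax may vary by firmware version):

```shell
UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create interface 1 8
UCS-A /eth-uplink/fabric/interface* # commit-buffer
```

Instead of `configure terminal` and per-interface config modes, you “scope” into an object in the management tree, create or modify it, and nothing happens on the wire until `commit-buffer` is issued.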

As far as SFP+ modules go, these systems seem to be a bit picky and favor Cisco modules (though I only have Cisco and Finisar modules to test with). Unfortunately there doesn’t seem to be a way to enable unsupported modules in UCSM, unlike in NX-OS. They do seem to work well with the Cisco-compatible modules from FS.com.
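For comparison, on a standalone Nexus running NX-OS the usual workaround is a hidden, unsupported (use-at-your-own-risk) global command; as far as I can tell there is no equivalent knob exposed through UCSM:

```shell
switch# configure terminal
switch(config)# service unsupported-transceiver
switch(config)# end
switch# copy running-config startup-config
```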

Conclusion

Overall I do like these systems. They look really good and will allow me to start building my 10Gb network on the cheap. But for most people building home labs these will not be a good fit. They are big (as long as a full-length server), they are loud, and they draw as much power as a full server system. If you have a lot of other UCS equipment then one of these systems may be just what you’re looking for, but if all you want is 10Gb I would highly suggest looking in another direction. On eBay a UCS 6120xp can go for anywhere from $100 to $300 (and more, of course), but I believe there are better options for more constrained home labs. Unfortunately I can’t recommend any by name at this point, but I’m sure there must be a better fit.

All in all, if you want high-density 10Gb and can stand the noise, then by all means a Cisco UCS system will serve you well! But for smaller setups I wouldn’t suggest it at all.

P.S.

I haven’t had a chance to do any throughput testing on either of these yet. Currently I only have one 10Gb NIC in the whole lab, so that’s not very useful. I may try some ESXi trickery to do a speed test (it’s a two-port card, so I should be able to attach one VM to each port and test it that way). I’ll report back with more info.
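When I do get around to it, the plan would be something like an iperf3 run between two VMs, one attached to each port of the card, with the traffic hairpinned through the fabric interconnect. Roughly (the address here is a placeholder):

```shell
# On the first VM (server side):
iperf3 -s

# On the second VM (client side), pointing at the server VM's address
# (10.0.0.1 is a placeholder); 30-second run with 4 parallel streams:
iperf3 -c 10.0.0.1 -t 30 -P 4
```

Multiple parallel streams help rule out a single TCP flow being the bottleneck rather than the switch itself.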