Decision-Making Process (Hardware):

My apologies to those who were waiting on the step-by-step build guide. I felt it was important to share my decision-making process first. I get questions about why I chose certain things over others, or why I did it this way and not another. These are important questions that determine the outcome of the design, so I thought I should answer them before diving into the nitty-gritty details. I know all the techies (myself included) like to jump straight to the good stuff, but hear me out!

So why did I build it? My purpose was to build a lab that I can use at home and take with me to customers for live demos. I want to demonstrate cloud automation, stretched clustering, monitoring, DR, containers, and other cool VMware solutions. In the past, I’ve seen people load ESXi onto laptops with custom drivers. I thought about doing this too, but laptops have limited RAM and don’t have the cool factor.

Why not just do the software demo over the Internet? I’ve done demos over the Internet, and so have millions (I made this number up) of others. It works well most of the time, but I wanted to do something different. If you’ve done demos, you know the Internet connection is not always reliable. At some point, you’ve probably apologized to customers repeatedly for slow speeds, or even abandoned the demo altogether and whipped out the PowerPoint slides. Having a demo unit with you guarantees end-to-end control. Also, it’s always cool to have something to look at and touch; it brings a different perspective to the table on what an SDDC is and can be. And no… I don’t carry this around everywhere I go.

The Case:

The case determined the form factor of the SDDC box. I needed to pick and choose equipment based on the case size and cram everything into it. The case also needs to conform to airline carry-on luggage size limits. SKB had a 4U travel case for audio equipment that fit the bill. As I explained in my previous post, audio-gear rack mount points are compatible with server rack mounts. I would need 2U for four E200-8D servers, 1U for the switch, and 1U for the patch panel. In the back: 3U for three 120mm exhaust case fans and 1U for the PDU.
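The rack-unit math above can be sanity-checked with a throwaway sketch (Python here just as a calculator; the allocations are the ones from this paragraph):

```python
# Quick sanity check that everything fits the 4U SKB case, front and rear.
# Rack-unit allocations come straight from the build plan above.
front = {"shelf with four E200-8D servers": 2, "10GbE switch": 1, "patch panel": 1}
rear = {"panel with three 120mm exhaust fans": 3, "PDU": 1}

CASE_U = 4  # SKB 4U travel case
for side, gear in (("front", front), ("rear", rear)):
    used = sum(gear.values())
    assert used <= CASE_U, f"{side} is over budget"
    print(f"{side}: {used}U of {CASE_U}U used")
```

Both sides come out at exactly 4U of 4U used, which is why a shelf with a lip (more on that below) blows the whole budget.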

Server:

After researching reviews and videos, I decided to go with the Supermicro E200-8D server. The server has five Ethernet ports: two 10GbE, two 1GbE, and one management port. It comes with a Hyper-Threaded 6-core Xeon processor and supports up to 128GB of RAM. vSphere 6.5/6.7 works on this server with a fresh install, no custom drivers needed. Also, the size was just right to fit four of these servers into 2U of space in the SDDC case. Please note that the server is not on the VMware HCL.

Supermicro sells a dual rack mount kit for the E200-8D. The dual mount kit is expensive, and it won’t fit into the SKB case because it is too long. If you are not taking these servers on the road and want to mount them in your rack, the rack kit may be a good choice. Instead, I found a cheap 1U 8″ deep shelf with NO LIP. Most shelves have a lip on the front or back to make the shelf more rigid. If you get shelves with lips, chances are you will not be able to fit all the gear in a 4U case. I know this because I bought them and had to return them. I’ll talk about how to secure the servers on the shelves in the next post.

Network:

I wanted full 10GbE throughput on a smart switch that supports creating VLANs and trunks. There are only a few budget 10GbE switch vendors out there, and the Buffalo 12-port 10GbE managed switch stood out. It was the only managed 10GbE switch that had all Ethernet-based ports, and it was cheap! Since the E200-8D has 10GbE Ethernet ports, it was a match made in heaven. The switch came with a rack mounting kit, which was great. Going copper-based 10GbE is a lot cheaper than Twinax or fiber with SFPs.

Since I was building a lab/demo case for travel, I wanted everything wired up. The sad thing was that I had a total of 20 ports and my 10GbE switch only has 12. The solution was to add a CAT6a patch panel and wire everything to it. I was short on time preparing the box for a customer demo, so instead of punching down the cables on the patch panel, I opted for a more expensive ready-made, shielded plug-in patch panel. If I do it again, I may punch down my own cables.
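The port math that forced the patch panel decision is simple enough to sketch (a quick calculation, with the counts from this build):

```python
# Port inventory for the SDDC box: each E200-8D has 2x 10GbE, 2x 1GbE,
# and 1x management port, and I wanted every port wired to the patch panel.
servers = 4
ports_per_server = 5
total_ports = servers * ports_per_server  # cables terminating at the panel

switch_ports = 12  # Buffalo 12-port 10GbE managed switch
print(f"ports wired to the panel: {total_ports}")
print(f"ports the switch can take: {switch_ports}")
print(f"left over at the panel: {total_ports - switch_ports}")
```

Twenty ports wired, twelve switched: the panel lets the remaining eight (the 1GbE and management ports) stay neatly terminated instead of dangling.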

Real estate inside the case is valuable. Every millimeter counts, and I couldn’t use thick, heavy shielded CAT6a cables. I had to look for the thinnest and lightest CAT6a cables I could find. I noticed a couple of thin cables on Tinkertry.com, googled them, and found them on Amazon. I picked up twenty 3-foot cables for the internal cabling.

Last but not least, I wanted a small router for Wi-Fi, routing, VPN, and the internet gateway. This router routes traffic between the physical network and the NSX Edge router in my SDDC box; in other words, the router handles North-South traffic and NSX handles all the East-West traffic. I searched for small routers based on DD-WRT or OpenWrt. I wanted the SDDC box to connect out to the internet over my phone’s hotspot or any other connection available. I found the Mango router (OpenWrt) on Amazon, which had all the features I wanted plus more. I’ve been using DD-WRT at home for a very long time and I am a huge fan: I buy consumer routers off the shelf, flash them with DD-WRT, and they become super juiced-up routers with all the bells and whistles.

Storage:

I knew from the beginning I was going to do All-Flash vSAN. The E200-8D has limited internal space: one SATA SSD and one NVMe drive (I didn’t know about the PCIe 3.0 NVMe 1U adapter at the time of the build). I decided to use a 250GB NVMe drive for the vSAN cache tier and a 1TB SATA SSD for the capacity tier. With vSAN RAID-5 erasure coding and dedupe/compression, it should be enough space for what I need. The sad thing about the E200-8D is that its SATA controller only has a queue depth of 32. Read why this could be a problem in “Why queue depth matters!” at yellow-bricks.com.
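As a back-of-the-envelope check on “should be enough space”: vSAN RAID-5 erasure coding is a 3+1 scheme (three data components plus one parity, a 4/3 capacity overhead, and it needs at least four hosts, which this build has), and only the capacity-tier SSDs count toward usable space; the NVMe cache tier does not. Ignoring slack space and any dedupe/compression gains, a rough estimate looks like this:

```python
# Back-of-the-envelope usable capacity for the 4-node all-flash vSAN.
# RAID-5 erasure coding in vSAN is 3+1 (3 data + 1 parity) = 4/3 overhead.
# The 250GB NVMe cache tier does not contribute to usable capacity.
hosts = 4
capacity_tb_per_host = 1.0   # one 1TB SATA SSD per node (capacity tier)
raid5_overhead = 4 / 3       # vs. 2x for RAID-1 mirroring

raw_tb = hosts * capacity_tb_per_host
usable_tb = raw_tb / raid5_overhead
print(f"raw: {raw_tb:.1f}TB, usable with RAID-5: {usable_tb:.1f}TB")
```

Dedupe/compression pushes the effective number above this, while vSAN’s recommended slack space pulls it back down, so treat the roughly 3TB figure as a ballpark only.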

I am in the process of upgrading the vSAN to be all-NVMe. NVMe drives are extremely fast and unlock up to 65,536 queues with up to 65,536 commands per queue. They are so fast that they can take full advantage of the 10GbE pipe and saturate it. Read about the benefits of running NVMe drives on vSAN on the VMware blogs. I’ll be using the spare 1TB SATA drives as a VMFS datastore for the nested vSAN. Nested vSAN does not work well on top of a physical vSAN datastore, which I found out on my old home vSAN lab. Here’s a helpful blog link: thinkcharles.net.

For the ESXi boot device, I decided to go with a low-profile USB 3.0 thumb drive. Why not SATADOM? Because USB 3.0 thumb drives are cheaper, and they are pretty fast.

Cooling:

I live in a small apartment in the DC metro area. My home office (a den) is open and adjacent to the living room, so I can’t have loud fan noise; it would drive my wife crazy. The E200-8D is not a quiet server by any means. Based on reviews and research, I decided to go with Noctua fans. These are expensive little buggers, but they are well worth it. Since they spin at a slower speed, I made sure I got enough cooling by adding a third fan to each server. To make sure all the hot air gets pumped out, I wanted a 3U rear panel with punch-outs for three 120mm Noctua exhaust fans. I couldn’t find any 3U three-fan 120mm panel that would fit the case, so I ended up building my own. I actually like the look.

Power:

I didn’t want all the power cables hanging out of the box; I wanted an “appliance” feel with a single power cable and a switch. I looked for 1U rack-mount PDUs that had a switch and outlets on both the inside and outside: the inside outlets to route all power cables internally, and the outside outlets to power any other devices and gadgets. Standard server power cables are too long to fit into the case, so I went with 1 ft server power cables and a 3 ft right-angle power cable for the switch. I think I could have gotten away with 2 ft for the switch.