I’ve posted a few snippets detailing elements of my home lab datacenter previously, but I thought it was time I did a full, in-depth write-up, so here goes.

First off, let’s deal with the elephant in the room. Why build a homelab? To answer this, I’m simply going to paraphrase an article I wrote for /r/homelab which was eventually merged into the wiki introduction.

Why build a homelab?

The answer is easy: to learn. IT professionals, amateurs, and people who just really like computers use homelabs for experimenting. It’s a sandbox environment where if you break it, you fix it, and, more importantly, it isn’t costing anyone money while it’s down.

Homelab [hom-læb](n): a laboratory of (usually slightly outdated) awesome in the domicile

Some uses for a lab

Self hosting – Host popular services on your own hardware. Just for the fun of it.

Game servers – Host a Minecraft server to play privately with friends or because you enjoy playing god.

Media servers – Multi-room streaming or a centralised location for your music and movies.

Storage – Archives, backups, centralised storage.

Web hosting – Host websites for friends or family for free.

Certifications – Certifications are a great way to make your CV stand out over other candidates. It’s possible to get your CCNA or VCP without a lab, but hands-on practice will be a lot more enjoyable.

Virtualisation – The fundamental OS in many homelabs is a hypervisor. Hypervisors allow budding sysadmins to set up nested, throwaway environments starting on just one piece of hardware (it doesn’t even need to be an enterprise-grade server).

Labs are largely used for experimentation before rolling things into a production environment, and for learning and practice involving all of the above plus much more. They’re fun. They’re expensive. They’re a hobby. Ultimately, for most people, a homelab is a plaything that occasionally gets out of hand.

Hardware

With that in mind, what am I running these days? Over the past few years I’ve had my fair share of hardware: everything from a single HP MicroServer to Dell R710s, with many combinations in between. Around a year ago, though, I moved into an apartment with literally no spare storage space. If I was going to continue to run a lab, it would have to reside in my living room or my bedroom. It had to be quiet but also powerful. Many people ask how this is possible, and unfortunately the answer lies in the trade-off triangle: cheap, quiet, powerful; pick two. But more on that later.

Building each node from parts

Rack-mounted gaming PC

Which runs to a TV

Front of rack (top to bottom)

Virgin Media Arris router in modem mode (200Mbit)

Cisco SG300-28 switch

Startech cable management

Supermicro 502L-200B chassis, X7SPA-H mobo, 2GB RAM, 120GB Samsung Evo

(3x) Supermicro SC505-203B chassis, A1SRi-2758F mobo, 32GB RAM, 120GB Samsung Evo, 500GB WD Black (x2), LSI 9207-8i HBA

Synology RS214 NAS with 2TB WD Red (x2) in RAID 1

Whitebox gaming PC in Logic Case SC-34390 with Intel i7-4770, 12GB DDR3 RAM, EVGA GTX750ti, Samsung 850 Pro 512GB SSD, Antec VP400 PSU, Arctic F8 Pro fans

APC SmartUPS 750 with APC 9630 NMC

Skewed rack does the job

Replacing stock fans in the RS214 with super silent Noctuas

Rear of rack (top to bottom)

Kenable UK plug PDU

1U cable brush

Kenable C13 PDU

3U plate with 2x Noctua NF-S12A FLX 120mm fans

2x 2U blanking plates

Perspex sheets cover holes in the side of the rack

Then 120mm fans at the back of the rack help keep things cool

Meraki MR18 WAP provides Wi-Fi

Collectively, I have access to around 62GHz of computing power, 98GB RAM and 7.5TB of raw storage. It draws around 150 watts at normal load (excluding the gaming PC).

Operating Systems

Supermicro 502L-200B – dedicated Untangle UTM

Supermicro SC505-203B – ESXi VSAN cluster

Logic Case SC-34390 – Windows 7

Synology RS214 – Synology DSM 6

Storage

Supermicro 502L-200B has a 120GB Samsung EVO SSD I had lying about. Exciting stuff.

The three Supermicro SC505-203B chassis are where things get interesting. Each uses a Toshiba 8GB USB 2.0 flash drive as an ESXi boot disk. Each chassis houses a 120GB Samsung EVO SSD, 2x Western Digital 500GB Black HDDs and an LSI 9207-8i HBA, and contributes storage to a VMware VSAN cluster. If you’re not aware of VSAN, in short, it is the king of software defined storage. Local disks are presented to the hypervisor, which, through all kinds of wizardry and witchcraft, pools them into a single datastore with per-VM storage policies, ludicrously simple scaling and amazingly tight integration as the storage platform for a vSphere environment. I store anything not mentioned in the NAS section below on a VSAN datastore.
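To give a feel for what those storage policies cost in capacity, here’s a back-of-the-envelope sketch. The FTT=1 mirroring overhead and the keep-roughly-30%-free rule of thumb are standard VSAN guidance; the helper function itself is purely illustrative, not anything VMware ships.

```python
# Back-of-the-envelope usable capacity for a 3-node VSAN cluster with FTT=1.
# Per host (matching my nodes): 1x 120GB SSD as the cache tier (contributes
# no usable capacity) and 2x 500GB WD Black as the capacity tier.

def vsan_usable_gb(hosts, capacity_disks_per_host, disk_gb, ftt=1, slack=0.30):
    """Usable GB after FTT mirroring (ftt + 1 copies) and free-space slack."""
    raw = hosts * capacity_disks_per_host * disk_gb
    mirrored = raw / (ftt + 1)        # FTT=1 stores two full copies of every object
    return mirrored * (1 - slack)     # keep ~30% free for rebuilds and rebalancing

raw = 3 * 2 * 500                     # 3,000 GB raw across the cluster
print(raw, round(vsan_usable_gb(3, 2, 500)))  # 3000 1050
```

In other words, of the 3TB of spinning rust in the cluster, only about a third is sensibly usable once mirroring and rebuild headroom are accounted for, which is why I call storage my weak spot later on.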

The Synology RS214 NAS contains 2 x 2TB Western Digital Reds in RAID 1, and presents various NFS shares to Linux VMs, iSCSI storage to ESXi for all logging appliances and vCenter (slower performance, but VSAN is a little tricky to resurrect via the command line without vCenter, so vCenter lives off the VSAN datastore), and SMB shares to Windows Server.

My other half was a little annoyed at how much time I spent on this so got to name the VSAN datastore

Networking and Security

All traffic is routed through Untangle before hitting the modem. There are eleven subnets, grouped by purpose, which are firewalled (by port, protocol, source and destination address; no messing about here), run through QoS, virus scanning, spam detection, phishing detection, ad blocking and intrusion detection. It also runs OpenVPN. All traffic destined for my webstack is routed through Cloudflare before hitting my public IP. I run the Untangle Home licence and Cloudflare free plan. 10/10, would do again.

The Cisco SG300 is, admittedly, underused. I have set up VLANs, but otherwise all routing and L3 functionality is handled by Untangle.

In addition to the heavily restricted firewalls in Untangle, each Windows Server VM runs (at least) advfirewall and each Linux VM runs (at least) UFW.

Software

My lab is ever changing (hey, that’s what it’s for, right?) but I currently have 18 virtual machines powered on 24/7 across 4 resource pools and 1 vApp.

VMware Horizon allows loading Windows 7 desktops on almost any web enabled device, from anywhere

Resource Pools

Dev – custom very low shares

Management – normal shares

TCU – webstack vApp – normal shares

VDI Clients – custom very low shares

WS Cluster – low shares

These pools help ensure that the VMs I use regularly get the resources they need, and that those playing a ‘supporting role’ don’t.

Virtual Machines

Windows Server 2012 R2

Management – GUI install – Windows Server management tools, Veeam, SQL Server Management Studio, vSphere client, WSUS

ad1 – core install – Active Directory Domain Controller

horsec1 – core install – VMware Horizon Security Server

sql1 – core install – dedicated SQL Server

ws1 – core install – VMware Horizon Connection Server

ws2 – core install – VMware Horizon Composer, Windows File Services

Windows 7

Windows 7 Horizon template

Windows 7 Horizon snapshot

5x Windows 7 Horizon VMs in automated cloned pool

Linux (a combination of Ubuntu, Debian and CentOS)

duo1 – Duo Security server providing 2FA

ns1 – DNS server running on PowerDNS

lb1 – HAProxy provides load balancing for nearly all services and Varnish caches web content for my LEMP stack

mysql1 – MySQL server with Redis caching

web1 – Nginx

mail1 – SMTP server with Postfix

monitor1 – An experimental VM running InfluxDB, Grafana and a script I wrote to pull data from IPMI

Various Appliances

VMware vCenter Server Appliance 6.5

VMware vRealize Orchestrator

VMware vRealize Log Insight

SexiGraf

A home-rolled script pulls data from IPMI into Grafana
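As a rough illustration of what that home-rolled script does, here’s a minimal sketch of the IPMI-to-InfluxDB glue. The hostname, sensor names and the `ipmi` measurement name are illustrative; the real script shells out to `ipmitool sensor` on each node and ships the result to InfluxDB, which Grafana then reads.

```python
# Sketch: parse `ipmitool sensor` output and render it as InfluxDB line
# protocol. Sensor/host names here are made up for illustration.

def parse_ipmi_sensors(raw):
    """Parse `ipmitool sensor` output ('Name | Value | Unit | Status | ...')."""
    readings = {}
    for line in raw.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 2 or fields[1] in ("na", ""):
            continue
        try:
            readings[fields[0]] = float(fields[1])
        except ValueError:
            continue  # skip discrete sensors with non-numeric readings
    return readings

def to_line_protocol(host, readings):
    """Render readings as InfluxDB line protocol, one point per sensor."""
    points = []
    for name, value in readings.items():
        tag = name.replace(" ", "\\ ")  # line protocol needs spaces in tags escaped
        points.append(f"ipmi,host={host},sensor={tag} value={value}")
    return "\n".join(points)

sample = ("CPU Temp        | 42.000     | degrees C  | ok\n"
          "FAN1            | na         | RPM        | na")
print(to_line_protocol("node1", parse_ipmi_sensors(sample)))
# ipmi,host=node1,sensor=CPU\ Temp value=42.0
```

The nice thing about this approach is that InfluxDB does the heavy lifting: once the points land, Grafana dashboards are just queries.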

Cost

This is the big one. As mentioned earlier, the trade-off triangle should be consulted before considering a project such as this: cheap, quiet, powerful; pick two. My requirements for this lab were that it must be near silent but powerful enough to run, well, basically everything I’ve listed above. The rack sits in my living room, about a metre from my sofa, and manages not to intrude. Requirement one: check. While it isn’t the fastest lab I’ve used, it runs all my stuff, so I’m ticking off point two as well, despite what you (yes, you with your absurd 20-thread Xeons that I’m totally not jealous of) may think.

I decided to do an honest and thorough comparison of the cost of my lab against the same rack built with second-hand Dell R710s, despite the fact that I could never run them without being bludgeoned to death by my housemate. Okay, OKAY, three R710s provide a lot more power than my three Atom boxes. That isn’t the point. R710s are the homelabber’s best friend: when you want a family dog, you look for a Golden Retriever; when you want a homelab server, you look for some pre-loved R710s. As for the specific count of three servers: three is the magic number (generally; there are ways around this, but that’s another topic) required to run a VSAN cluster, which was the main point of building this lab in the first place.

Rambling over, on to the numbers!

Supermicro Build
Supermicro CSE-505-203B (x3) – £325.00
Supermicro MBD-A1SRi-2758F-O (x3) – £900.00
Kingston 8GB ECC SO-DIMM (x12) – £480.00
Fans, brackets, PCI extension cards etc. – £305.00
Subtotal – £2,010.00

Dell Build
Dell R710 8 core, 32GB RAM (x3) – £900.00

Communal Parts
LSI SAS 9207-8i (x3) – £360.00
Western Digital Black 500GB (x6) – £210.00
Samsung EVO 120GB (x3) – £150.00
Startech SAS cables + SATA power splitters (x3) – £40.00
Startech 12U rack – £140.00
Startech 1U shelf – £20.00
Cisco SG300-28 – £160.00
Supermicro X7SPA – £120.00
Blanking plates – £10.00
Synology RS214 with 2 x 2TB – £370.00
APC AP9630 – £45.00
APC SMT750RMI2U – £285.00
Startech 1U cable management – £20.00
Kenable PDU (x2) – £50.00
Power cables, ethernet cables, velcro – £25.00
Subtotal – £2,005.00

Electricity (cost per annum)
Supermicro rack (~0.150kW draw) – £200.00
Dell rack (~0.550kW draw) – £770.00

Total Cost of Ownership
Supermicro build – end of year 1 – £4,215.00
Dell build – end of year 1 – £3,675.00
Supermicro build – end of year 2 – £4,415.00
Dell build – end of year 2 – £4,445.00
Supermicro build – end of year 3 – £4,615.00
Dell build – end of year 3 – £5,215.00
Supermicro build – end of year 5 – £5,015.00
Dell build – end of year 5 – £6,755.00

I didn’t build this rack in an attempt to save money (it was strictly about noise) but it’s nice to know that in a few years I’ll be £1,740 up too!
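For anyone who wants to replay the break-even maths, the totals reduce to a one-liner (all figures in GBP, taken straight from my cost breakdown):

```python
# Total cost of ownership: hardware up front plus electricity per year.
# Electricity figures are from the cost breakdown: this rack draws ~150W
# (~£200/yr); three R710s would draw ~550W (~£770/yr).

def tco(hardware, electricity_per_year, years):
    return hardware + electricity_per_year * years

supermicro_hw = 2010 + 2005   # Supermicro build + communal parts
dell_hw = 900 + 2005          # three used R710s + communal parts

for years in (1, 2, 3, 5):
    print(years, tco(supermicro_hw, 200, years), tco(dell_hw, 770, years))
# 1 4215 3675
# 2 4415 4445
# 3 4615 5215
# 5 5015 6755
```

The Dell rack’s £1,110 hardware saving is eaten by £570 a year in extra electricity, so the two builds cross over just before the end of year two; by year five the quiet build is £1,740 ahead.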

FAQs

Why didn’t you use Xeon D boards? Is there a reason not to?

Money. Pure and simple. If money were no object, I’d have bought some Xeon D-1541 boards.

What would you do differently if you were to do this again?

Technically, nothing. I would like better and/or more disks for VSAN, but I say nothing because I had the WD Blacks lying about, so it’s not like I wasted money on the wrong hardware. If you want to replicate my setup or build something similar, better and/or more HDDs/SSDs should improve performance significantly.

What’s the performance like?

This is a tough one to answer. It depends on your expectations and the workload. My lab runs at perfectly bearable speeds but doesn’t have a particularly high workload. vCenter is quite slow but has improved massively since v6.5 was released (mmmm, yummy HTML client). The Atom CPUs are surprisingly capable and not a bottleneck yet. Storage is, without doubt, my weak spot.

What are your future plans?

Special, ridiculous and completely unnecessary things. I intend to add a few more nodes sometime this year, after which I’ll immediately and permanently spin up NSX (I have played with it but just don’t have the resources to run it 24/7). My lab has been built with everything ready to be made redundant, and that wasn’t an accident: services such as AD and my webstack will hopefully be scaled out to the cloud soon. No single points of failure for me.