Hi all! I apologize for not posting more often! I have been working on a lot of new, interesting things but haven’t had a whole lot of time to write about it.

You might remember my previous ESXi host build: Lenovo TS140 as ESXi and NAS box – with a twist! The transplanted Lenovo TS140 has served me well – it still performs great and didn’t break the bank. The specifications of that server are:

Lenovo TS140 mainboard

Intel E3-1246v3 CPU (3.5 GHz, 4c/8T)

32GB DDR3 ECC memory

LSI 9260-8i RAID Controller w/ BBU

8 Western Digital 4TB Red drives in RAID50 (32TB raw)

ESXi 6.0 Update 2

If you read my blog you might recall I have another lab with two Dell R710s (each with dual X5670 CPUs and 144GB of RAM) and a Dell R510 serving up 10 GbE FreeNAS/ZFS storage over NFS and iSCSI. The lab I am upgrading in this post is not that one – that one performs great and has ample capacity.

The specs above have served me well with one exception: the memory. Anyone who runs a hypervisor for testing knows that memory is everything. 32GB of RAM may be a ton for a workstation or desktop, but it isn't much for an ESXi host. The CPU has never been an issue – the E3-1246v3 runs at 3.5 GHz, and even though I over-commit CPU fairly aggressively, it has never been a bottleneck in this lab. Instead, I always run out of memory capacity, or I have to size VMs extra small and suffer the consequences.

Because I am testing with vCenter, vSphere Replication, a domain controller, various web servers, database servers, etc., 32GB of allocation goes quickly. In fact, for the last year or more I have had only ~2GB of RAM free on the host. Because the E3-1246v3 supports a maximum of 32GB of RAM, I set out to build a new host as cheaply as possible while providing as much RAM capacity as reasonable. Enter the Intel Xeon E5-2670.
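To put the squeeze in concrete terms, here's a quick back-of-the-envelope sketch. The VM names and allocations below are hypothetical examples of a lab like this one, not my exact inventory:

```python
# Back-of-the-envelope memory budget for a 32GB ESXi host.
# VM names and allocations are hypothetical lab examples.
vms = {
    "vcenter": 10,              # GB - vCenter Server Appliance
    "vsphere-replication": 4,
    "domain-controller": 2,
    "web-01": 2,
    "web-02": 2,
    "db-01": 4,
    "db-02": 4,
}

host_ram_gb = 32
hypervisor_overhead_gb = 2  # rough allowance for ESXi itself

allocated = sum(vms.values())
free = host_ram_gb - hypervisor_overhead_gb - allocated

print(f"Allocated to VMs: {allocated} GB")
print(f"Roughly free:     {free} GB")
```

Even this modest inventory leaves only a couple of gigabytes to spare, which matches what I've been living with.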

Why use the E5-2670?

Supposedly Facebook.com upgraded their servers in late 2015/early 2016, and a ton of the components ended up on eBay via various wholesalers. It seems Facebook was mostly using Intel Xeon E5-2670 CPUs, which originally carried a $1,550 MSRP. This CPU offers 8 cores and 16 threads while supporting a maximum of 384GB of RAM each – perfect for my solution! Since they're a few years old and the used market is flooded with them, they were available for anywhere between $60 and $70 apiece about 8 months ago (early 2016). But since many people caught on and started buying them up on eBay, the price has climbed. Right now eBay shows them at $190-210 for a pair.

When I started my build plans I knew I wanted to use the E5-2670s, and I also knew I wanted a motherboard with at least 16 DIMM slots so I could use 8GB DIMMs, since they're more affordable. I considered picking up an R510 (like in my DIY SAN/NAS build post) with E5649s (6 cores/12 threads), but that would mean having only 8 DIMM slots available, limiting me to 64GB of RAM (or having to find 16GB DIMMs, which are still too expensive). Additionally, a dual-CPU R510 with 64GB of RAM would likely cost upwards of $550-600 even with a good deal, and I think I can build something newer with more capacity for a similar price or less, while also likely being a little quieter (since I will go with a 4U chassis). I managed to find a pair of Intel E5-2670s on eBay for $150:
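The slot-count math above is easy to sketch out. The per-DIMM prices below are rough used-market assumptions from around the time of this build, not actual quotes:

```python
# Compare candidate memory configurations by capacity and rough cost.
# Per-DIMM prices are used-market assumptions, not actual quotes.
configs = [
    # (description, dimm_count, dimm_size_gb, est_price_per_dimm_usd)
    ("R510, 8 x 8GB",        8,  8, 25),
    ("R510, 8 x 16GB",       8, 16, 70),
    ("S2600CP2J, 16 x 8GB", 16,  8, 25),
]

for name, count, size, price in configs:
    total_gb = count * size
    total_cost = count * price
    print(f"{name:22s} -> {total_gb:4d} GB for ~${total_cost}")
```

With these assumptions, 16 cheap 8GB DIMMs reach the same 128GB as 8 pricey 16GB DIMMs at a noticeably lower total, which is why the 16-slot board wins.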

With the CPUs purchased, I could home in on the rest of the hardware. After researching affordable dual-socket 2011 boards, I came across the Intel S2600CP2J, sold by a company called Natex.us. The board looked promising, so I decided to pick one up. There are a couple of quirks – mainly, there is a firmware package called the FRU/SDR that needs to be updated in order for the fans to operate appropriately. However, thanks to our friends over at www.ServeTheHome.com, I was able to find some information on that. The board showed up very quickly and was packed very well – thanks, Natex.us!

You'll notice in the image above that a bunch of memory modules are installed. I also picked up (16) Hynix PC3-10600R ECC registered DIMMs for a total of 128GB of memory. I considered picking up (8) 16GB DIMMs so I could expand to 256GB later, but I think 128GB will be more than enough, and 16GB DIMMs are still too expensive (for lab use).

Keeping it cool

So far my plans have been going well. This is where it started to get a little tedious. Again, trying to keep everything as cheap as possible, I wanted CPU coolers that would be decent yet economical. I have used the Cooler Master Hyper 212+ (and Hyper 212 EVO) on another workstation build and it was more than adequate. However, I was worried it wouldn't fit this setup. The socket/motherboard combination would work, but because I want to put this server in a 4U chassis, I needed to be mindful of overall height – and the Hyper 212+ is just too tall. A real shame, because at $29.99 each it would have been perfect for this build.

The catch with socket 2011 server and workstation motherboards is that you need to be careful: there is a square ILM and a narrow ILM (Independent Loading Mechanism). My board has the square type, so I can use most coolers (so long as they're not too tall). After researching further, it looked like I'd need to spend a few bucks on coolers to do this right. Most 2U and 3U coolers are designed to sit in chassis with ducting and use relatively high-RPM fans; in fact, many server coolers are entirely passive, relying only on ducted airflow (common in 1U and 2U chassis). I wanted something quiet that would fit in a generic 4U chassis. The only sure bet was the Noctua NH-U9DX i4. There are cheaper coolers, but Noctua is great quality and quiet – sure to keep things cool.

The Noctua NH-U9DX i4s arrived well packed from Amazon. They weren't cheap – at $55 each, they cost almost three-quarters the price of the CPUs. But they're compatible with both socket 2011 and 2011-v3, so I can reuse them in a future build. They'll last me at least one more generation of ESXi host builds.



Once I removed them from their boxes, I remembered why Noctua costs a few bucks more than other brands. They're designed in Austria, and you can tell a lot of care goes into the product. Here are some images highlighting just how nice these coolers are:

In the image above, you'll notice that the clear silicone adhesive-backed strip sticks up above the heat sink itself. This strip dampens vibration from the fans attached to the unit while running. Because the strips are adhesive-backed, dust and lint tend to stick to them over time. Do yourself a favor: use a straight-edged razor and trim the excess from the top and bottom. This will not only look better but will also keep dust from building up.

Once installed (very simple with socket 2011/2011-v3, just apply thermal compound and tighten the screws until they stop), they look awesome on the board:

Even if you're not into computer hardware, you have to admit the two large heat sinks above look neat. The only steps remaining are to install the fans in whichever configuration you prefer and wire them to the motherboard. Here is an image with just two fans installed:

The above will work, but I've decided to install all four fans. I'd like to keep the CPUs as cool as possible, since I am going to use as few chassis fans as possible while letting the motherboard control the fans. Here are all four fans installed:

If you're very observant, you'll notice that a splitter is being used at the fan headers on the motherboard. I've gone this route because the motherboard most likely controls the CPU coolers' fan speed only via the "CPU Fan" headers. Alternatively, you could power the fans from some of the "SYS Fan" headers, but then those fans might run at a different RPM than the main CPU fans relative to CPU load. As mentioned earlier, there is some work to be done with the BIOS and the FRU/SDR package for fan control/speed, but I'll touch on that later once I actually get the thing powered up and running.

What’s next?

Next, I need to acquire a chassis to put all of this in. My current Lenovo TS140 system is transplanted into a Rosewill RSV-4000 4U chassis with internal storage. I can't reuse it because it does not support 12″ x 13″ EEB/E-ATX boards. Drats. So, I am considering keeping it cheap with a Rosewill RSV-L4000 or potentially picking up a Norco RPC-4224. We'll see! My next post will involve picking up a power supply and firing this thing up. I need to find a decent power supply that isn't too expensive and supports the board's dual EPS12V/ATX12V connectors.

Thanks for reading guys! Stay tuned for more cool stuff!


