My NSC-810 / Denverton build log / thoughts (with photos)



It seems that nobody else has described their experience, so I'll describe mine.

Parts list

Reasons and choices

Originally I had an ES34069 4-bay case with an old Westmere-based system, running Windows Server 2008 R2 with Windows Home Server v1 inside Hyper-V, maxed out at 8GB RAM, and it was beginning to show its age. I wanted to replace it with a newer system: preferably with more HDD bays, definitely with support for more RAM (and ECC, while we're at it), and with relatively high performance, so that I could finally compartmentalize all the components of my home system into separate VMs and also migrate a dedicated server for my pet projects (~$45/month at Hetzner) to my home. Oh, and we were also using that ES34069 system as an HTPC, so the new system needed at least a decent integrated GPU with HDMI output (with the prospect of upgrading to 4K/60fps video).

Our previous ES34069 build





At the same time, the new server needed to be compact and quiet; it was going to live in the same 60(L)x40(D)x38(H)cm (23.5x15x15in) Ikea Besta TV unit, behind a closed door, as the old PC did. The size constraints alone ruled out pretty much every 5+ bay case except for the U-NAS ones.

On the other hand, I was building this new server to last for a long time, so limiting myself to 64GB RAM from the start was out of the question.

This did not leave me many options. The only systems with >64GB ECC RAM support at the moment were AMD EPYC (which did not even come in a micro-ATX format), Skylake-X, and various server-grade (i.e. excluding Xeon E3) Intel offerings. Skylake-X and Skylake-W Xeons were immediately ruled out by their insane power requirements (I doubt 120-180W would have any chance of dissipating in a small closed cabinet, much of which was going to be occupied by the server itself), Broadwell-DE (Xeon D) seemed pretty much outdated, and Skylake-D was not on the horizon yet, so in the end I was choosing between Skylake-SP and Denverton.

Initially, Skylake-SP seemed to be the better choice, with its 768GB RAM support, (almost) all the newest tech (such as AVX2, or a 10GbE option), and upgradeability. After carefully comparing the two, however, I came to the conclusion that all the additional small expenses make even the cheapest Skylake-SP system cost the same as a 16-core Denverton-based one (Denverton CPU+MB with 4xLAN: $696; Xeon Bronze 3104 + cheapest motherboard with 2xLAN + heatsink/fan + 2x PCI-E risers + 2xLAN NIC + NSC-810A/810 difference in price = $220+$400+$50+$20+$40+$10=$740), while being ~2x slower (if servethehome tests are to be believed). And a Xeon Silver 4108-based config, while offering performance similar to the 16-core Denverton, would cost at least $950. Not to mention that cooling an 85W CPU with a 6cm fan in such a crowded space with really restricted airflow would not be a good idea.

So in the end, I settled on the 16-core Denverton / NSC-810 combo.

UPD: Since then, Skylake-D has come out and AMD EPYC Embedded has been announced. From Skylake-D test results, it seems that in non-AVX-optimized scenarios, one Skylake-D core is worth a bit more than two Denverton cores; so I'd need an 8-core Skylake-D to match this 16-core performance (with Skylake being 20-30% faster, and several times faster on loads optimized for AVX). However, Skylake-D motherboards are much more expensive: 8-core Mini-ITX motherboards are well north of $1000, and the existing Supermicro motherboards are worse than my Denverton one, with their 2xNIC, 4xSATA+Oculink, and lack of an M.2 slot; not to mention that Skylake-D runs quite hot; and then there is that quagmire of Spectre and Meltdown and Intel's reaction to these...

EPYC Embedded, on the other hand, is a totally different beast. Its 12C/12T CPUs are cheaper than the 8C/16T Skylake-D (and there is hope that motherboards will be cheap, too); it is much better on connectivity; and it should be much faster... If I were building this system now and not in November, I'd wait for EPYC mini-ITX motherboards (or maybe micro-ATX with the NSC-810A).





Purchasing

Originally I planned to get all the hardware from different suppliers at once, at the start of December. It was not as easy as it seemed.

First, I tried to source the motherboard. From November 3rd, I was emailing a whole lot of stores, and none had the motherboard in stock or offered international shipping (besides several EU-based ones that offered no option of not paying EU VAT, so I would have had to pay ~$1000 for the motherboard). Finally, WiredZone answered that they were expecting the motherboard shortly and were willing to do international shipping. I was thinking of getting the RAM and SSD from them as well, but their stock availability is a mess, all international ordering can only be done by email, and you have to wait a couple of days for every response. In the end, two weeks later, they said that the motherboard was now in stock, that USPS shipping would be $150 for the motherboard alone (3x the price on the USPS website), that they still had to check availability for the other hardware, and that I should just send them money via PayPal (as in, "send money to a friend/relative"). I decided to skip all that and simply created a new order on the website for the motherboard alone, with a US package forwarder's shipping address, and paid for it with a conventional PayPal payment. An odd thing is that all that time the motherboard was listed as "out of stock", and I don't think I ever saw it as "in stock" there - go figure. So I paid for it on November 24th, they shipped it on the 28th, and it arrived at the package forwarder on December 5th, more than a month after I first contacted them. That was a terrible experience. The package forwarder, on the other hand, processed everything fast and smooth, and the package arrived in Russia on New Year's Eve, for just $30 (insurance and forwarder fee included).

While all this was going on, I contacted the U-NAS seller; they had a special offer running on Taobao at the moment, bundling a free Seasonic PSU and custom power cables with every NSC-810 purchase. I decided that was better than ordering the case on u-nas.com (which did not feature that PSU offer, and which would have involved a long China-US-Russia shipping route). Despite Taobao being targeted at the Chinese domestic market, everything went just great; the U-NAS folks offer excellent support in English, and we settled the international shipping matters without a problem. We also discussed Denverton motherboard compatibility, and although it is not a standard option, they offered to fit the case with 2xSAS cables (sourced somewhere else) instead of 8xSATA. I ordered it on November 15th, they shipped it on November 20th (as I understand, they build every case individually), and I received it on November 28th. It was super fast - even too fast; I only got the opportunity to assemble it 6 weeks later.

Package from China!

The case is quite heavy, at 6.6kg with the PSU

I've tried to make as many cable ends visible in this mess as I could

Nuts and bolts that come with the NSC-810





When it became clear that I would only purchase the motherboard from WiredZone, I ordered the rest from Computeruniverse. However, by that moment the pre-New-Year high season was in full swing, and there was a mess at Computeruniverse, too. On November 24th, I made an order for everything that was in stock; then, a couple of days later, a better item appeared in stock, and I wanted to change my order - and that was a mistake. Computeruniverse email support is shit. You can spend two months going in circles with them and get nowhere. Phone support, on the other hand, is great and immediately solves all problems; but it was not until December 27th (and an extra $600 frozen on my account for another month) that they shipped it.

In the end, I received it all on January 9th and immediately started building the system.

Some extra hardware in this photo

Building

I realized my first two mistakes immediately.

1) The motherboard came in a box with two SFF-8643 to 4xSATA cables. I should not have asked the case seller to fit the case with these cables. I have troubled them, paid extra for the cables, and now I have four of them (although I only need two) and none of the slim SATA cables in case I one day upgrade to a motherboard with 8xSATA ports. Additionally, it is worth noting that the back of the backplane has 8xSATA ports as well (I had thought there would be 2xSAS instead).

2) Although there are 4xMolex power connectors on the backplane, these are already wired into 2xMolex connectors for the power supply. So you don't actually need a custom PSU; any FlexATX PSU with 2xMolex connectors will do.

And another huge problem is the fan. I had calculated the case clearance and the RAM height, and it seemed that there should be around 20mm of clearance between the RAM and the HDD cage. Wrong. It is 15-16mm for the most part, but there are some parts sticking out of the HDD cage, and the edge of the backplane sticks out too, reducing the clearance in those spots to 14mm. And there is no way to install a 12cm fan so that it does not cover the RAM (especially if you have a PCI-E card installed). It is a shame, actually, because these unnecessary protruding parts prevent you from using a large quiet fan to effectively cool a modern system with DDR4. If only these parts did not stick out; or if only the case were 2mm wider... As it is now, I'd say that even a 9cm x 14mm fan cannot be used with this motherboard (it would be too thick to fit between the RAM and the HDD cage, and too wide to fit between the RAM and the GPU).

In the end, I placed the fan on top of the HDD cage. It will probably stir the airflow coming from the rear case fans so that more of it reaches the CPU, and it will also somewhat cool the GPU; I don't have any other use for this fan anyway.

There is also another small problem with the power cables: the ATX cable is so rigid, and the clearance between the motherboard and the HDD cage so small, that you have to apply considerable force when installing the motherboard. Additionally, with such an odd case layout (the motherboard is attached to the case from the bottom, not from the top), it is impossible to install the motherboard with all the cables already attached; you have to detach the 4-pin ATX cable first, then secure the motherboard in place, and then attach it again - which proves to be quite a feat in such a small space (you'll need long, thin fingers for that).

An additional oddity is that there are 2xUSB2.0 headers and 2x dual-port USB3.0 headers coming out of the front panel. Why U-NAS made it that way, when there are definitely no 2xUSB2.0 and 4xUSB3.0 ports on the front panel, is a mystery to me.

Everything else about the build went smoothly.

I somehow lost all the other photos taken during the build process. This new NAS should prevent that from happening :)

Here everything is fixed in place, connected, and running; it's the final build, right before closing the case.

You can see the 12cm fan simply lying on top of the HDD cage; it's not even horizontal, because there is a 2.5" bay there.





Configuring

Booting up the system takes several minutes; you see various pre-BIOS debug messages during that time. I have no idea why it takes so long.

IPMI works fine; I only needed to connect a display once to configure the IPMI IP/network settings.
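For what it's worth, if connecting a display is inconvenient, the BMC network settings can usually also be set in-band with ipmitool (e.g., from a Linux live USB; Supermicro's IPMICFG tool does the same on Windows). A sketch only; the channel number 1 and the addresses are assumptions to adapt:

ipmitool lan print 1
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.20
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1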

Fans can only be configured through the IPMI interface. There are four system-wide presets: "Full power", "Optimized", "Silent", and "Storage". In "Full power", all fans blow at full speed regardless of thermal readings, which is quite noisy and unnecessarily powerful. "Silent" mode behaves weirdly: when I turned it on, the fans ran at minimum speed for a minute, then sped up to full speed for a second, then slowly wound back down to minimum. Based on what I found on the internet, the minimum RPMs for these fans are so low that the motherboard thinks they have stopped entirely and that it should reset all fans. With my build in the cabinet, running some load, it never seems to get cool enough for the fans to run at that super-low speed, so I am not experiencing this problem at the moment.
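If that full-speed reset cycle does bite you, the commonly reported fix for Supermicro boards is to lower the fan sensors' lower RPM thresholds over IPMI, so the BMC stops treating slow-but-spinning fans as failed. A sketch with ipmitool; the sensor name FAN1 and the threshold values are assumptions (run "ipmitool sensor list" to see yours):

# lower non-recoverable / lower critical / lower non-critical, in RPM
ipmitool sensor thresh FAN1 lower 100 200 300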

I was going to run a bunch of virtual machines on it: pfSense routers, a web server migrated from the dedicated server at Hetzner, a NAS server with StableBit DrivePool, an HTPC with Windows 10, etc. I decided to use Hyper-V for virtualization, because it's free, because it is relatively easy to manage from my Windows laptop, and because I'm quite experienced with managing Windows.

And when I tried to install Windows, I ran into another problem. In most cases, for most ways of creating the bootable stick, the motherboard just won't boot from the USB stick. I only managed to boot the Windows Server installer by first creating a bootable Windows 10 stick with the official Media Creation Tool, and then unpacking the Windows Server ISO onto it, overwriting all the Windows 10 WIMs with the Windows Server ones. I did not manage to boot the Ubuntu 17.10 live environment at all. One thing I discovered is that if your USB stick is not formatted with FAT32, the PC won't boot from it, the EFI shell won't see it, and you won't be able to mount it manually; NTFS is not supported. You have to format it with FAT32.
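For reference, the usual obstacle with FAT32 sticks is that install.wim in recent Windows Server ISOs is larger than the 4GB FAT32 file-size limit; a common workaround is to split the image with Dism. A sketch only - the drive letters (D: for the mounted ISO, E: for the stick) are assumptions:

Format-Volume -DriveLetter E -FileSystem FAT32
# copy everything from the ISO except sources\install.wim, then split the WIM onto the stick:
Dism /Split-Image /ImageFile:D:\sources\install.wim /SWMFile:E:\sources\install.swm /FileSize:3800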

So I managed to install Windows after all; I had to install network drivers from the Supermicro website, because Windows Server 1709 does not support these network adapters out of the box. Installed Hyper-V, installed BitLocker; everything went smooth and fine, except that Supermicro saved some cents by selling a $696 motherboard without a TPM (while even $50 tablets come with one); you have to purchase the TPM module separately for $40 or live without it. Right now I'm storing my encryption keys on a SanDisk Cruzer Fit USB stick, but it's not quite secure.
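In case it's useful, a minimal sketch of that USB startup-key setup (assuming group policy already allows BitLocker without a compatible TPM, and that E: is the stick; both are assumptions):

# store the startup key on the stick and encrypt the system drive
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -StartupKeyProtector -StartupKeyPath "E:\"
# a recovery password protector is a sensible backup in case the stick is lost
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector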

And then I started to configure the virtual machines. Now, that was a pain, because it turned out that PCI-E device passthrough (Discrete Device Assignment) is really, really limited on this motherboard.

I was going to pass the entire SATA controller through to the NAS guest (so that it would have access to all the controller features and SMART statuses, would recognize all new HDDs connected to the system, etc.)... but the only thing I got was a cryptic message along the lines of "SATA controller is not a PCI Express device" (despite it being connected to a PCI-E root port and having a LocationPath characteristic of a PCI-E device). I had to fall back to good old disk passthrough instead. So no full StableBit Scanner support for me. And whenever I need to insert another HDD into the hot-swap HDD cage, I'll have to connect to the host system, switch that HDD to offline mode, and add it to my NAS VM manually, roughly as sketched below.
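For completeness, this is roughly what that manual procedure looks like in PowerShell (the disk number and VM name are assumptions for illustration):

# take the physical disk offline on the host...
Set-Disk -Number 2 -IsOffline $true
# ...then hand it to the VM as a passthrough disk
Add-VMHardDiskDrive -VMName "NAS" -ControllerType SCSI -DiskNumber 2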

Then, I was going to pass all the network controllers through to their respective pfSense instances. Well, this worked, except that pfSense does not recognize these NICs because there are no drivers. I had to use network virtualization instead (and to disable all network protocols on these NICs in the host system).
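One way to express that setup is a plain external vSwitch per physical NIC, with the host kept off the wire; a sketch under assumed names (adapter "Ethernet 2", switch "WAN", VM "pfSense" are all placeholders):

# external switch bound to one physical NIC; the host gets no vNIC on it
New-VMSwitch -Name "WAN" -NetAdapterName "Ethernet 2" -AllowManagementOS $false
Add-VMNetworkAdapter -VMName "pfSense" -SwitchName "WAN"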

The NVidia GPU was obviously meant for the HTPC guest system, and passing it through was not the problem. The problem was making it work there. It turns out that NVidia (as opposed to AMD?) explicitly prohibits using their regular (non-Tesla) GPUs in virtual environments; as a result, my GT710 was marked with an exclamation mark in Device Manager ("The device cannot start", error code 42). There is a patch for the NVidia drivers which disables their check for virtual environments; however, it (obviously) produces unsigned drivers, so I had to switch Windows 10 into a special "test mode" which allows unsigned drivers. And still, Windows constantly tried to overwrite these drivers with the ones it obtained from Windows Update; the Hide-WindowsUpdate PowerShell script did not hide these drivers for some reason, so in the end I just created a directory in place of that wrong .inf file and removed all permissions from it.
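That last trick, sketched in PowerShell (the oemNN.inf name is hypothetical; find the actual package Windows Update keeps restoring first):

$inf = "C:\Windows\INF\oem42.inf"   # hypothetical name, replace with the real one
Remove-Item $inf
New-Item -ItemType Directory -Path $inf | Out-Null
icacls $inf /inheritance:r   # strip the inherited ACEs, leaving no permissions on the directory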

Another problem with the GPU is that the Hyper-V GPU is always selected as the primary one. So, unless you reconfigure things after every reboot or any minor change in the environment, you'll have the Start menu on the virtual screen, and all apps will open on the virtual screen by default. I disabled the Hyper-V GPU in the guest system, but then that default display just reappeared as a "basic display", without even being visible in Device Manager. It turns out that Hyper-V falls back to the standard VGA adapter when the guest does not support the Hyper-V GPU. To finally solve the problem, I disabled the `BasicDisplay` service through the registry. Now my VM only uses the NVidia GPU, and I can only vmconnect to it in an Enhanced Session (which uses some version of RDP internally).
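The registry change itself is one value under the service key, set inside the guest (4 means SERVICE_DISABLED):

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\BasicDisplay" -Name Start -Value 4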



Then, USB - the most painful part. I was going to pass the entire USB controller to the HTPC guest system, so that to everyone the system would work just like an ordinary HTPC box with Windows 10: the keyboard/mouse would work in the guest system, USB sticks connected to the system would be visible in the guest system, etc. Windows offers some protection against shooting yourself in the foot there, not allowing the user to disable a USB controller with HID devices connected to it; and there is only one USB controller, and the IPMI virtual keyboard/mouse are connected to it (instead of IPMI having its own USB controller). These protections are easily worked around by disabling the relevant USB3 services. However, for some reason, Supermicro also prevents you from detaching the USB controller:

PS C:\Users\root> Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(1500)" -Force
Dismount-VMHostAssignableDevice : The operation failed.
The device cannot be assigned to a virtual machine as the firmware (BIOS or UEFI) on this host computer system indicates that the device must remain in contact with the firmware running in the host. The device can only be used in the management operating system. You should contact your OEM to determine if a firmware upgrade is available, or if the PCI Express device can be reconfigured to be independent of the host firmware.
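For reference, the location path to feed into that cmdlet can be read from the PnP device properties; something along these lines (the friendly-name filter is an assumption, adjust it to match your controller):

$usb = Get-PnpDevice -Class USB -FriendlyName "*Host Controller*" | Select-Object -First 1
(Get-PnpDeviceProperty -InstanceId $usb.InstanceId -KeyName "DEVPKEY_Device_LocationPaths").Data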

Supermicro did not respond to my messages on that matter.

I was originally hoping that they would respond and provide some BIOS update to fix this, so I installed a trial version of USB-over-Ethernet software to at least pass the physical keyboard and mouse to the guest HTPC system. However, the full version of that software goes for ~$1000, and it creates a huge security hole in the form of a network connection between the root management system and the HTPC one, with its games and other untrusted software; and it seems that Supermicro won't respond after all, so I'm at a loss there.

So, to summarize: you cannot pass the USB controller or the SATA controller to your VMs; configuring NVidia GPUs for VMs is a PITA; and the NICs aren't supported in some major *nix OSes.

Performance

With the CPU cooler disconnected and the system in "silent mode", the CPU temperature is at ~53°C; the system is in the room (not in the cabinet)

11 hours of running with all coolers disconnected; the system is in the room (not in the cabinet)

Sensor readings for the previous screenshot (that is, 11 hours under full load without any active cooling)

Geekbench results for a Windows 10 VM with 14 cores: https://browser.geekbench.com/v4/cpu/6576115 (the poor multi-core results were probably caused by limited memory bandwidth, as I currently have a single RAM stick working in single-channel mode).

Conclusion

Right now, this build serves as an HTPC, NAS, router, and web server. All four LAN ports are put to use (one is connected to the internet provider; another, to the WiFi access point; the third is used to manage the root system; the fourth is used to manage client systems such as pfSense).

Unsolved problems / things to do:

1) What to do with USB pass-through? I need to pass at least the keyboard and mouse (and preferably all USB devices) to the HTPC (Windows 10), and the current temporary workaround with USB-over-Ethernet is clearly unsustainable: it breaks hypervisor isolation, and, once the trial for the USB-over-Ethernet program ends, it will cost a lot of money to purchase (around $1000 IIRC).

2) It would be nice to pass the SATA controller through to the NAS VM and to set up StableBit Scanner there.

3) 16GB RAM is really not enough. As soon as RAM prices become somewhat reasonable again, I'm going to buy an additional 16GB stick (enabling dual-channel mode as well), which will allow me to run heavier applications on the HTPC and to set up some additional services such as a TOR relay / mining / etc.

4) The GPU should be replaced with some power-efficient Radeon with 4K/60fps support. Right now the low-end RX550 has a TDP of 50W and costs $100. I hope there will be lower-end cards in the next generation.

Usually the door is closed; I opened it for this photo

Close-up

In closed cabinet, under light load

In closed cabinet, an hour into full CPU load



