Intel NUCs with ESXi are popular as home servers and in many home labs. If you are generally interested in running ESXi on Intel NUCs, read this post first. One major drawback is that they only have a single network port. There are USB NICs on the market, but on ESXi hosts they only work in passthrough mode. That means that USB NICs can only be used inside VMs, not as a vmnic for the hypervisor itself.

The slightly older 4th Gen NUCs had a Mini PCIe slot that allowed an additional NIC to be installed. With that slot it was possible to install, for example, a Syba Mini PCIe NIC. While the adapter is unsupported with ESXi and does not fit into the NUC chassis, there are solutions for both problems.

Unfortunately, the 5th Gen NUC no longer has a Mini PCIe slot. Instead, it has M.2 slots. An easy solution would be an M.2 NIC, but as of today no such cards are available. In this post I will explain how to use PCIe cards in the M.2 slot to upgrade the 5th Gen NUC with additional NICs or other cards like Fibre Channel HBAs.

More Information about the 5th Gen NUC M.2 Slot

M.2 is also known as Next Generation Form Factor (NGFF). It is a specification for internally mounted computer expansion cards. It is the successor to the mSATA standard, which uses the PCIe Mini Card physical card layout and connectors. M.2 is a very flexible standard that allows different module sizes and various interfaces. As M.2 cards are available in many possible variations, they are divided into different Form Factors and Keys.

Form Factors - M.2 devices are denoted using a WWLL naming scheme, where "WW" specifies the module width and "LL" specifies the module length in millimeters. You can find notations like "M.2 2280 Module" in the NUC documentation.
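The WWLL scheme is simple enough to decode mechanically. As a minimal illustration (the function name is mine, not part of any spec):

```python
def decode_m2_form_factor(code: str) -> tuple[int, int]:
    """Decode an M.2 WWLL form-factor code into (width_mm, length_mm).

    The first two digits are the module width; the remaining digits are
    the module length. Lengths can have three digits (e.g. "22110").
    """
    width = int(code[:2])
    length = int(code[2:])
    return width, length

# The NUC's M.2 slot takes a 2280 module: 22 mm wide, 80 mm long.
print(decode_m2_form_factor("2280"))   # (22, 80)
print(decode_m2_form_factor("22110"))  # (22, 110)
```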

Keys - M.2 modules provide a 75 pin connector. Depending on the type of module, certain pin positions are removed to present one or more keying notches. Host-side M.2 connectors may populate one or more mating key positions, determining the type of modules accepted by the host. There are 12 different keys specified (Key A-M).

Not all 5th Gen Intel NUCs have the same M.2 slot, but the slot I am mainly talking about in this post is available on all NUCs. It's the slot where you add the M.2 SSD. Only one NUC, the NUC5i5MYHE, provides a second M.2 slot (which has a different key). Instead of the second M.2 slot, the other NUCs have a pre-soldered WiFi module.

There are two M.2 Slots available in 5th Gen NUCs:

M.2 Key B (2280) - All 5th Gen NUCs
M.2 Key E (2230) - Only NUC5i5MYHE

The Key E slot is used for a WiFi adapter. The Key B slot is typically used for an M.2 SSD. But according to the specification, they should theoretically provide the following interfaces:

Key B: PCIe ×2, SATA, USB, Audio, UIM, HSIC, SSIC, I2C and SMBus

Key E: PCIe ×2, USB, I2C, SDIO, UART and PCM

M.2 to PCIe Adapter

Knowing that M.2 is compatible with PCIe, the only thing you need is an adapter, right? Simple, but this kind of adapter appears to be very uncommon. I could only find these two adapters from a company called Bplus:

The P14S-P14FP adapter has a Key B interface, which is available on all 5th Gen NUCs. It converts the M.2 slot to a PCIe X4 slot where you can insert your PCIe network adapter.

The P15S-P15F adapter has a Key E interface, which matches the second M.2 slot on the NUC5i5MYHE. It converts the M.2 slot to a Mini PCIe slot. (As a side note, I didn't manage to get the P15S-P15F to work in my NUC, but I'm currently trying to find out why.)

M.2 and PCIe Voltage Issue

My first attempt to install the P14S-P14FP adapter was unsuccessful. The problem is that M.2 only provides 3.3V, while PCIe cards require both 3.3V and 12V. To solve that issue, the P14FB provides a 4-pin FDD power jack and a 15-pin SATA power jack. Unfortunately, the NUC has neither of these connectors, nor any internal 12V headers. The only solution is to use an external power adapter.

Any 12V power adapter should work. I am using one from a portable hard drive. I connected it to the 4-pin FDD power jack using a "5.5 mm Female DC Power Connector" and a "2 Pin Male to Female Jumper Wire".

Make sure that you know the polarity. Commonly, + is on the inside, but some adapters might differ. The power adapter's polarity should be printed on its label. When in doubt, use a multimeter.

P14S-P14FP Installation

The adapter (or Extender Board) consists of 2 PCBs that are connected with a flexible flat cable (FFC). Be aware that it comes without any case, so you probably have to build your own. Without further modification, the setup is very fragile and looks like this:



Required Components

Of course, this is in no way a supported configuration. It's only for engineering purposes. I can't guarantee that it will work, or that it won't break your components.

Install the P14S into the NUC's Key B M.2 slot. It's a 2280 card that fits perfectly into the NUC.

Carefully lift the black bracket to an angle of 90 degrees, slide the FFC into the connector (blue side upwards), and close the bracket. The bracket should slightly squeeze the cable, but the cable should not bend.

Route the cable out of the NUC. If you have the NUC5i5MYHE, you can remove the serial port bezel and put the cable through. Connect the other side of the FFC to the P14FB PCB.

Connect the Dupont 2PIN Cable to both PCBs. Make sure to connect the red wire to the marked pin on both sides.

Connect an external 12V power adapter to the 4-pin FDD connector.

Insert a PCIe card into the P14FB.

Plug in the 12V adapter (you should see a green light indicating 12V power).

Power on the NUC.

Always plug in the 12V adapter first.

For the first test I am using a network adapter with an Intel 82576 chipset. This is fully supported with ESXi, so I do not have any driver issues. If your ESXi does not detect the card, you should verify that the NUC has detected it in the BIOS (Devices > PCI). If the card has been detected there, you probably have a driver issue in ESXi.



My NUC currently runs with VMware ESXi 6.0.0 build-2494585 (Setup Howto). The card has been detected without any further modification.

lspci output:

[root@esx6:~] lspci |grep vmnic
0000:00:19.0 Network controller: Intel Corporation Ethernet Connection (3) I218-LM [vmnic0]
0000:04:00.0 Network controller: Intel Corporation 82576 Gigabit Network Connection [vmnic1]
0000:04:00.1 Network controller: Intel Corporation 82576 Gigabit Network Connection [vmnic2]
0000:05:00.0 Network controller: Intel Corporation 82576 Gigabit Network Connection [vmnic3]
0000:05:00.1 Network controller: Intel Corporation 82576 Gigabit Network Connection [vmnic4]

esxcli network nic list output:

[root@esx6:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  --------------------------------------------------
vmnic0  0000:00:19.0  e1000e  Up            Up           100    Full    b8:ae:ed:75:08:68  1500  Intel Corporation Ethernet Connection (3) I218-LM
vmnic1  0000:04:00.0  igb     Up            Down         0      Half    00:1b:21:93:b3:b0  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic2  0000:04:00.1  igb     Up            Up           1000   Full    00:1b:21:93:b3:b1  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic3  0000:05:00.0  igb     Up            Down         0      Half    00:1b:21:93:b3:b2  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic4  0000:05:00.1  igb     Up            Down         0      Half    00:1b:21:93:b3:b3  1500  Intel Corporation 82576 Gigabit Network Connection

vSphere Client Network Adapters:



Performance (Update: September 29, 2015)

Some words about the performance. The NUC's M.2 slot is based on PCI Express (PCIe) Gen 2 and has 2 lanes (X2). Each lane supports a data transfer speed of 5.0 GT/s (gigatransfers per second). To get the actual usable bandwidth, you have to take into account that PCIe Gen 2 uses 8b/10b encoding, which means that it requires 10 bits to transfer 1 byte (8 bits).

5.0 GT/s * 2 (lanes) * 8/10 (encoding) = 8 Gbit/s = 1000 MB/s

The maximum bandwidth of the 5th Gen NUC's M.2 slot is 1000 MB/s.

According to the P14S-P14FP Extender Board documentation, it supports "PCI Express base Specification 1.1 (Up to 2.5Gpbs)". I'm not sure where the "2.5Gpbs" comes from; presumably it refers to the raw per-lane rate, as PCIe Gen 1.1 supports 2.5 GT/s per lane. The card supports two lanes. (It's an X4 slot because X2 slots do not exist.)

2.5 GT/s * 2 (lanes) * 8/10 (encoding) = 4 Gbit/s = 500 MB/s

The maximum bandwidth of the P14S-P14FP adapter is 500 MB/s.
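Both calculations follow the same formula, so they can be double-checked with a few lines of Python (a minimal sketch for PCIe generations that use 8b/10b encoding, i.e. Gen 1 and Gen 2; the function name is mine):

```python
def pcie_bandwidth_mb_s(gt_per_s: float, lanes: int) -> float:
    """Usable bandwidth in MB/s for a PCIe Gen 1/Gen 2 link.

    8b/10b encoding sends 10 bits on the wire for every 8 payload bits,
    so usable Gbit/s = GT/s * lanes * 8/10. Divide by 8 bits per byte
    (1 Gbit = 1000 Mbit) to get MB/s.
    """
    usable_gbit_s = gt_per_s * lanes * 8 / 10
    return usable_gbit_s * 1000 / 8

# 5th Gen NUC M.2 slot: PCIe Gen 2, 2 lanes at 5.0 GT/s
print(pcie_bandwidth_mb_s(5.0, 2))  # 1000.0 MB/s

# P14S-P14FP adapter: PCIe Gen 1.1, 2 lanes at 2.5 GT/s
print(pcie_bandwidth_mb_s(2.5, 2))  # 500.0 MB/s
```

So even though the M.2 slot could deliver 1000 MB/s, the adapter limits the link to 500 MB/s, which is still plenty for a quad-port gigabit NIC.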