Here are the Technical Specifications for the tested 32GB module.

Intel® Optane™ Memory Series 32GB M.2 80mm Performance

Sequential Read (up to) 1350 MB/s

Sequential Write (up to) 290 MB/s

Power - Active 3.5 Watts

Power - Idle 1 Watt

Endurance Rating (Lifetime Writes) 182.5 TB

Enhanced Power Loss Data Protection No

These are essentially the speeds I was seeing in the tests under ESXi 6.5.0d, as described below. Note that these particular consumer M.2 modules are unlikely to be on the VMware HCL/VCG.
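That endurance rating is notably high for a consumer part of this capacity. A quick back-of-the-envelope calculation puts it in perspective; the 182.5 TB lifetime-writes figure comes from the spec sheet above, while the 5-year period is my assumption (Intel's typical warranty term), not something the spec sheet states:

```python
# Back-of-the-envelope endurance math for the tested 32GB module.
# 182.5 TB lifetime writes is from the spec sheet; the 5-year
# period is an assumed warranty term, not a published spec.
capacity_gb = 32
lifetime_writes_tb = 182.5
warranty_years = 5  # assumed

writes_per_day_gb = lifetime_writes_tb * 1000 / (warranty_years * 365)
dwpd = writes_per_day_gb / capacity_gb  # drive writes per day

print(f"{writes_per_day_gb:.0f} GB/day, {dwpd:.2f} DWPD")  # → 100 GB/day, 3.12 DWPD
```

Roughly 3 full drive writes per day, far beyond what typical consumer NAND SSDs are rated for.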

Intel Optane is available now at:

Amazon in 16GB and 32GB sizes.

B&H in 16GB and 32GB sizes.

Newegg in 16GB and 32GB sizes.

Newegg - Intel Optane M.2 2280-S3-B-M 32GB PCIe 3.0 x2 with NVMe Memory Module MEMPEK1W032GAXT

How Intel Optane / Micron QuantX / 3D XPoint actually works

There has been months of build-up surrounding the Optane product launch, with a consumer-friendly use case that I outlined here:

XPoint Storage Accelerator arrives in Kaby Lake ThinkPads in early 2017 as a really fast 16GB NVMe cache for its Intel RST RAID volume

Dec 29 2016

essentially intended to speed up your C: drive in Windows for all frequently accessed data and programs.

Later on, the PCIe Optane cards clearly intended for the datacenter emerged, with 3 different use cases. They'll likely soon be followed up with a more general-purpose consumer version. This PCIe form factor is not yet widely available for testing, initially announced in capacities up to 800GB. All Optane products feature this very new 3D XPoint-based storage/memory. See also:

"Introducing 3D XPoint™ Technology" by Micron.

This weekend, it was finally time to tinker with my just-arrived Intel Optane M.2 drive as plain old storage. Well, not just any storage, but presumably very fast NVMe storage. Just how fast was the first question.

I should be very clear here: Optane M.2 modules only come in 16GB and 32GB sizes right now. Given this tiny size, it's even clearer that they're only intended to accelerate (cache) slower mechanical drives in very new Optane-ready laptops and desktops. Those tests have shown impressive results for accelerating boot times and applications. But those tests weren't comparing the performance of Optane to drives like the new Samsung 960 EVO or PRO M.2 NVMe SSDs when used as normal NVMe storage. I was curious: how would it compare with my 1TB 960 EVO drive?

Given Intel's consumer focus here, I knew full well that I was highly unlikely to find drivers or firmware for VMware ESXi. Heck, these M.2 modules are so new that I wasn't even sure a new firmware for them would be available, and whether those update tools would be Windows-only.

All that aside, it didn't take me long to spot the driver VIB for ESXi 6.5 over at vmware.com, which could speed things up. That had happened before: a vendor driver greatly accelerated NVMe speeds when I tested the Intel 750 Series under ESXi 6.0, way back in 2015. I know, long shot.

I just wanted to get my hands on a relatively affordable bit of 3D XPoint that is shipping now for under $100. Basic testing would hopefully give me a taste of the drive's speeds. Having had great luck with a variety of M.2 NVMe devices, I expected installation to go smoothly, either in my SuperServer's PCIe slot using an adapter card or right in my motherboard's M.2 slot, since both options benefit from full PCIe 3.0 x4 speeds.

Turns out I got the same low performance numbers with this Optane whether using a PCIe adapter or installing it on the motherboard. These results were considerably lower than my many positive experiences with my Samsung 950 PRO, 960 EVO, and 960 PRO M.2 NVMe drives, also formatted as VMFS. Especially slow were the write speeds. Darn. I was hoping 3D XPoint would be able to shine even in such an affordable and tiny capacity, but my hopes are so far unrealized, at least with the drivers I've tested under ESXi. There are many potential reasons for this, but it's highly unlikely that either VMware or Intel is particularly interested in optimizing their drivers for this unintended use case.

Samsung 960 EVO at left, Intel Optane M.2 at right. When using either the VMware NVMe driver or the Intel NVMe driver, speeds were essentially the same for Optane.

For my first tests, seen in the video below, the 32GB drive was installed in my Supermicro SuperServer Xeon D-1541 Bundle 2 system, at IPMI 3.52 and BIOS 1.2 with my Recommended BIOS settings, running VMware ESXi easily updated to 6.5.0d, with Windows 10 Creators Update running ATTO Disk Benchmark 2.47.

Here's an outline of what you'll see me do, in the video below, recorded live as I went, with voice-over:

formatted 100% of the available 32GB of space as VMFS 6.81

used vSphere Client (HTML5) to deploy two Windows 10 VMs from my "Golden Master" Template that uses VMware Paravirtual instead of the default LSI Logic SAS, with essentially the same speeds for both SCSI types here, placing one VM on the Samsung 960 EVO, the other on the Intel Optane 32GB M.2

booted both VMs

ran ATTO Disk Benchmark on both VMs on the same host concurrently

ran ATTO Disk Benchmark on each VM sequentially, disappointing Optane speeds

added Intel NVMe VIB, rebooted

ran ATTO Disk Benchmark on each VM sequentially, same disappointing Optane speeds

disabled the VMware NVMe driver module, rebooted

ran ATTO Disk Benchmark on the Optane VM (the 960 EVO VM had vanished due to the missing VMware NVMe driver), same disappointing Optane speeds

Remember, these drives are meant mostly for fast read speeds, for cache. This article is just a first look, not the end of the story, and tinkering will continue. I should also test these Optane M.2 SSDs under Windows 10 formatted as NTFS, just to see what kind of impact virtualization is having. I'd also like to test with VMware vSAN 6.6, even though it's completely unsupported. Backup first, and see what happens. But I don't have the 4 hosts to really do vSAN right, only 2. So I'll have to get a bit creative; that's a work in progress.

Remember, these little 16GB and 32GB M.2 modules aren't the same as the 800GB DC P4800X, a much faster device that proclaimed full vSAN support on day 0:

vSAN Got a 2.5x Performance Increase: Thank You Intel Optane!

Mar 19 2017 by Michael Haag

For vSAN, the results demonstrate that the Intel Optane NVMe SSDs provide an extremely high-performance caching device for write-intensive workloads. Customers can see immediate benefits for applications like VDI persistent storage use cases and next-generation applications such as Big Data, video streaming and real time streaming analytics. An All-Flash vSAN system comprised of Optane-based NVMe cache devices delivers a very scalable and performant HCI solution for next-gen apps in the modern data center.

Eventually getting my hands on a P4800X at some later date would certainly be interesting!

Wrapping up with a set of NVMe-related esxcli commands that you might find helpful. Many of these were used in the video below, inspired by Anthony Spiteri's similar work with replacing AHCI drivers.

How to install the Intel NVMe Optane driver

first, use WinSCP to push the intel-nvme-1.2.1.15-1OEM.650.0.0.4598673.x86_64.vib file from within the downloaded zip file to the ESXi host's /tmp folder

second, install the VIB using esxcli from an SSH session to the host, logged in as root:

esxcli software vib install -v /tmp/intel-nvme-1.2.1.15-1OEM.650.0.0.4598673.x86_64.vib
reboot

How to disable the VMware NVMe driver

esxcli system module set --enabled=false --module="nvme"

How to remove the Intel NVMe driver

esxcli software vib remove --vibname=intel-nvme

With the Intel NVMe VIB removed, the host needs to be rebooted.

How to verify which NVMe driver(s) is/are installed

esxcli software vib list | grep nvme

How to see which drivers are active

esxcli system module list | more

How to make changes active

reboot

How to re-enable the VMware NVMe driver

esxcli system module set --enabled=true --module="nvme"
reboot

Summary of the two driver VIBs I worked with

Here we're looking at the relevant subset of active drivers discussed in the video, when both were enabled. I learned that the first driver listed, VMware's nvme, is used for the Samsung 960 EVO SSD, and the second, Intel's intel-nvme, is used by the Intel Optane M.2 SSD.

esxcli system module list | more

Name        Is Loaded  Is Enabled
----------  ---------  ----------
nvme        true       true
intel-nvme  true       true

Intel Optane M.2 consumer SSDs for caching might not work well for tiny VMware VMFS datastores

Intel Optane Memory Module 32 GB PCIe M.2 80mm MEMPEK1W032GAXT at Amazon.


These are essentially the speeds I was seeing in the tests above, with write speeds actually slower than what SATA3 2.5" SSDs can do. Also notice the lack of Enhanced Power Loss Data Protection, another reason these are not recommended as caching devices for vSAN. That said, it is quite an endurance rating compared to any consumer Samsung M.2 NVMe SSD.

Given I'm already getting near-native speed, retesting this M.2 Optane module with NTFS is not a priority for me personally, especially with other folks already benchmarking them, at TweakTown for example.

See Evan's feedback in the comments below. I've also now measured the VM at various queue depths, from the lowest ATTO allows of 2, to the highest of 10. Admittedly, I have not directly measured latency. This article continues to be a work in progress, as I determine whether I keep this drive or not.

ATTO Disk Benchmark 2.47 at Queue Depths of 2, 3, 4, 6, and 10. Click twice to zoom all the way in, then slide the image left to right.
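Why queue depth matters so much here can be reasoned about with Little's law: sustained IOPS is roughly queue depth divided by average latency, which is why a very low-latency medium like 3D XPoint pulls ahead at the shallow queue depths typical of desktop workloads. A minimal sketch, with purely illustrative latency figures that I did not measure:

```python
# Little's law sketch: sustained IOPS ~= queue_depth / average_latency.
# The latency values below are illustrative assumptions, not measurements
# from this article; they only show the shape of the relationship.
def iops(queue_depth, avg_latency_seconds):
    return queue_depth / avg_latency_seconds

for qd in (1, 2, 4):
    # hypothetical 4K random-read latencies: ~10 us (XPoint-like) vs ~80 us (flash-like)
    print(qd, round(iops(qd, 10e-6)), round(iops(qd, 80e-6)))
```

At QD1 the low-latency device is already near its peak, while flash needs deep queues to catch up, matching the QD1-2 advantage TweakTown describes below.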

Saving your money for the DC P4800X, or the rumored P900 consumer version, still seems to be a good strategy, since these capacities are still rather small, even in a RAID 0, where the performance at low queue depths does become more intriguing:

Intel Optane in RAID 0 - World's Fastest System Disk

Jun 28 2017 by Jon Coulter at TweakTown

As we explained and demonstrated throughout the entirety of this review, 4K random read at QD1-2 has the largest influence on system performance in an OS environment. This is exactly where Optane has changed the game forever. Optane Memory delivers random read performance that is between three and ten times better than the best flash has to offer.

Fantastic work by Florian Grehl! All kinds of details, what an analysis, simply incredible!

Using the first 3D Xpoint based Intel Optane SSD with ESXi

Aug 31 2017 by Florian Grehl With 32GB, it doesn't make sense to buy them for anything else than their intended use case: Cache device to enhance SSD/HDD Performance. If you want to use Optane technology as VM Datastore, wait a couple of months when devices with a higher capacity are available.

All Optane mentions at TinkerTry, dating all the way back to Feb 26 2016!

How 3D XPoint Phase-Change Memory Works

Jun 02 2017 by Allyn Malventano at PC Perspective