Update: I’ve added extra links at the end of this post, covering some background reading/watching from SNIA.

According to a recent article on The Register, Diablo Technologies appears to have closed down. Diablo was or is a manufacturer of NVDIMM and DRAM extension technology, including Memory1. As previously reported, many of the original founders and senior execs had already left the company over the past 12 months. Is this just another company failure, or does it signal issues with the adoption of NVDIMM and SCM technology?

NVDIMM

NVDIMM, or non-volatile DIMM, is a technology that uses the DRAM DIMM form factor and is directly plug-compatible with system memory. As the name suggests, the contents of the DIMM are not lost when the power is turned off. Diablo calls this technology Memory Channel Storage, with the rest of the industry generally knowing it as Storage Class Memory or SCM. If we compare traditional (for example flash) storage to DRAM, one big difference is the way in which storage is addressed. DRAM is byte-addressable, whereas flash is block-addressable. Flash requires an entire block of data to be rewritten for an update – DRAM can do this at the byte level.
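The byte-versus-block distinction can be sketched in a few lines of Python. This is an illustration only – a small file stands in for the storage device, and the second half mimics the read-modify-write cycle a flash controller must perform to change a single byte:

```python
import mmap
import os
import tempfile

# A 4 KiB file stands in for a storage device in this sketch.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

# Byte-addressable (DRAM-like): map the region and change a single
# byte in place; no other data is touched.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[100] = 0xFF
    mm.flush()
    mm.close()

# Block-addressable (flash-like): to change one byte, an entire block
# must be read, modified and written back.
BLOCK = 4096
with open(path, "r+b") as f:
    block = bytearray(f.read(BLOCK))  # read the whole block
    block[101] = 0xFF                 # modify a single byte
    f.seek(0)
    f.write(block)                    # rewrite the whole block

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
```

Both updates land the same result, but the flash-style path moved 4 KiB of data to change one byte – which is why byte-addressability matters for fine-grained updates.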

So now we have two characteristics – addressability and volatility. The third consideration here is connectivity. DRAM is connected to the processor by what used to be a separate chip called the northbridge or memory controller hub (today integrated into the processor). Storage is connected via the southbridge, typically as SATA or SAS (let’s exclude NVMe for a moment). Obviously, being closer to the processor results in lower latency for the application, and persistence brings extra benefits in how applications commit data to external storage. Being able to write to persistent DRAM alleviates some of the performance hit of writing to external storage. Diablo licensed its NVDIMM technology to other companies such as SanDisk, which produced the ULLtraDIMM products.

Memory1

Diablo’s Memory1 product is slightly different. The solution uses the same technology, but doesn’t offer persistence. Instead, Memory1 claims to be cheaper than DRAM and to allow much more “pseudo-DRAM” to be deployed per server than could be achieved with traditional DRAM alone. So what’s the point, you may say. Well, there are a few potential benefits to using Memory1, as outlined by Maher Amer, CTO of Diablo, at a Tech Field Day presentation in 2016. Maher shows how the latency introduced by QPI on multi-socket NUMA-based processor architectures can slow a system down to a level that is worse than using Memory1. QPI is Intel’s QuickPath Interconnect – technology that in multi-socket systems enables one processor to access the resources (like DRAM) of another.

It’s worth watching the TFD videos in full; in summary, the benefit Diablo claims for Memory1 is a significant improvement in performance from not having to traverse QPI (because more memory can be assigned to each processor). There is also a potential benefit in licensing savings for systems that have to go quad-socket simply to get more addressable memory. Using Memory1, dual-socket systems could be used instead, saving money on per-socket licensing costs.
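The per-socket licensing argument is simple arithmetic. The figures below are entirely hypothetical – neither the licence price nor the socket counts come from Diablo or any vendor price list – but they show the shape of the saving:

```python
# Illustrative only: the licence price is an assumption, not a real
# price-list figure.
LICENCE_PER_SOCKET = 7000    # assumed per-socket software licence cost

sockets_without_memory1 = 4  # quad-socket box bought just for DRAM capacity
sockets_with_memory1 = 2     # dual-socket box, capacity made up with Memory1

saving = (sockets_without_memory1 - sockets_with_memory1) * LICENCE_PER_SOCKET
print(f"Licence saving per server: ${saving}")
```

Halve the socket count and the per-socket licence bill halves with it – multiplied across a fleet of servers, that adds up quickly.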

Disadvantages

All this sounds good, if you have the application architecture to exploit it. Memory1 wouldn’t be a useful replacement for traditional DRAM in, say, a hyper-converged or traditional virtualisation cluster. Similarly, NVDIMM provides persistence, but needs programming and hardware changes. First, servers need to support NVDIMM at the BIOS level. Second, the operating system needs to be aware of which DRAM is persistent and which isn’t. Third, the application needs a similar view of the non-volatile DRAM in order to exploit it effectively.
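To see why applications must be persistence-aware, consider crash consistency. The sketch below uses a hypothetical record layout (an 8-byte “valid” flag guarding an 8-byte payload) and a memory-mapped file standing in for persistent memory; on a real NVDIMM the same ordering would be enforced with cache-line flushes and fences, or a library written for the purpose. The point is that the application itself must control the order in which data becomes durable:

```python
import mmap
import os
import struct
import tempfile

# Hypothetical layout: 8-byte "valid" flag at offset 0, 8-byte payload
# at offset 8. The flag must never become durable before the payload it
# guards, otherwise a crash could leave a "valid" record with garbage
# data. A regular mmap'd file plus flush() stands in for NVDIMM here.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

with open(path, "r+b") as f:
    pmem = mmap.mmap(f.fileno(), 4096)

    pmem[8:16] = struct.pack("<Q", 12345)  # 1. write the payload...
    pmem.flush()                           # ...and make it durable first

    pmem[0:8] = struct.pack("<Q", 1)       # 2. only then set the flag
    pmem.flush()

    # Crash recovery: trust the payload only if the flag made it out.
    valid, = struct.unpack("<Q", pmem[0:8])
    value, = struct.unpack("<Q", pmem[8:16])
    pmem.close()
os.remove(path)
```

With block storage, the storage stack and filesystem handle much of this ordering for you; with byte-addressable persistent memory, it becomes the application’s problem – which is exactly the programming burden described above.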

NVMe

Compare the benefits and disadvantages of NVDIMM/Memory1 to NVMe. NVMe, or Non-Volatile Memory Express, sits on PCIe – strictly a point-to-point connection rather than a bus. PCIe is connected at the northbridge, so is closer to the processor. Because it isn’t a shared bus, PCIe avoids the contention issues of a shared architecture and so can scale better. Performance improves with each generation of PCIe, and by aggregating channels or lanes together, a single device can deliver more throughput. So flash devices that use NVMe can deliver high-performance, low-latency persistent storage. Intel uses NVMe for Optane drives, which have latencies as low as 10µs. Standard flash-based NVMe drives can deliver performance in line with technologies like Memory1. Don’t forget, many NVMe drives (in the 2.5″ drive form factor) are also hot-swappable – it’s certainly not possible to hot-swap DIMMs. NVMe flash drives also offer high capacity compared to what was offered with Memory1.

The Architect’s View

So what conclusions can we draw? It appears that NVMe can be implemented more easily than NVDIMM technology, delivering similar performance improvements. NVMe drive support is already widespread in existing operating systems. With Optane, we only have PCIe card and SSD form factors. A DIMM form factor has only recently been announced, with products not expected to launch until 2H2018. Diablo perhaps didn’t gain traction for their technology because of the niche use cases, whereas NVMe has more general applicability.

Does this mean we’ve seen the end of NVDIMM as a technology? I doubt it will simply die off; after all, Intel will surely be making a big push with Optane DIMMs. But the adoption is harder to achieve and the use cases more limited. Probably the biggest issue to overcome is how to program for NVDIMM. The O/S needs to know which memory pages are persistent and which are not, and somehow the application needs to know how to take advantage of this. Contrast this with the use of NVMe to store consistency points or checkpoints for in-memory databases (for example) and NVDIMM doesn’t appear to give a significant advantage.

What do you think? Have you seen a different scenario playing out with NVDIMM? Do you think the NVDIMM market just hasn’t got started yet? I’d be interested in hearing the view of the wider community, especially if you have a vested interest in what NVDIMM could offer. SNIA has also asked me to highlight their Persistent Memory Summit, coming up in 2018 (link).

Further Reading

Comments are always welcome; please read our Comments Policy first. If you have any related links of interest, please feel free to add them as a comment for consideration.

Copyright (c) 2009-2019 – Post #1891 – Chris M Evans, first published on https://www.architecting.it/blog, do not reproduce without permission.