Analysis The SSD format is wrong for flash memory storage arrays. That is the message from DSSD, EMC’s rack-scale, shared flash array development.

The Register has seen pictures of DSSD’s flash modules and these are not disk-bay form factor SSDs. Rather, they resemble Violin Memory’s VIMMs – Violin In-Line Memory Modules.

Pure Storage and others have dissed Violin for not using commodity SSDs and thus falling behind the flash storage technology development curve.

However, our understanding is that Pure Storage* is also developing its own flash-carrying modules and stepping away from the SSD format, as has DSSD.

Developing proprietary flash cards can lead to lower power budgets, less heat generation, and faster data access times compared to standard SSD or PCIe form factors.

We understand, from talking to sources, that the DSSD product is a 5U enclosure containing 36 flash modules. These come half- or fully populated, at 2TB or 4TB capacity, with 16TB versions coming. Each module has dual PCIe gen 3 X4 ports – X4 meaning four lanes – and dual-controller PCIe hot-plug capability.

Total capacity is 144TB with a planned doubling to 288TB.
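The capacity figures are simple multiplication across the 36 slots – a quick sanity check, using the slot count and module sizes quoted above:

```python
slots = 36               # front-loading flash module bays

# Shipping configuration: 4TB per fully populated module.
print(slots * 4)         # 144 (TB) – matches the quoted total capacity

# The planned doubling to 288TB implies 8TB per slot;
# the promised 16TB modules would go further still.
print(288 / slots)       # 8.0 (TB per module)
```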

All active DSSD components are hot-swappable and the system is highly available. All core modules are replaceable by customers and there are 96 (PCIe) gen 3 X4 client ports.

Here are the pictures we have seen:

DSSD enclosure frontal view showing 36 front-loading flash modules

Inside the box there are flash modules:

DSSD flash module in case

When the case is removed we see:

DSSD flash module with casing removed. Note the diamond-shaped structure to the right of the flash chips. There is an orange pull-tag on the left. Source: VirtualGeek.

This card is double-sided with flash chips on both sides. The diamond-shaped structure provides IO connectivity to the flash chips and, because of its orientation, can provide parallel IO to these chips.

DSSD controller board

The flash modules do not have their own controllers. They are mounted, so to speak, on DSSD’s own controller board, with a CPU carrying out wear-levelling and flash media management. This board has 12 PCIe switch chips, meaning 64 lanes in total. Each PCIe switch board has 48 connection ports, and the boards are duplicated for redundancy. So 48 servers can be connected, each with a PCIe gen 3 X4 connection running at roughly 4GB/sec one way.
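The per-server figure falls out of PCIe gen 3’s raw line rate. A rough sketch of the arithmetic – line encoding accounted for, higher-level protocol overheads ignored:

```python
# PCIe gen 3 runs at 8 GT/s per lane with 128b/130b encoding,
# so usable one-way bandwidth per lane is just under 1GB/sec.
gt_per_lane = 8e9
encoding_efficiency = 128 / 130

lane_gbytes = gt_per_lane * encoding_efficiency / 8 / 1e9  # GB/sec, one way
x4_gbytes = 4 * lane_gbytes                                # one X4 client port

print(round(x4_gbytes, 2))      # ~3.94 GB/sec per server link
print(round(48 * x4_gbytes))    # ~189 GB/sec aggregate one way across 48 servers
```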

So we have 48 servers connecting to 144TB of shared flash storage across a PCIe fabric (think NVMe fabric), accessing data at near-memory speed in their memory address space rather than by traversing the host OS's IO code stack. It's server flash mounted outside the servers.

When EMC bought DSSD we understood it had three strategic investors: Intel, SAP and Toshiba. The product design reflects a disaggregation of the traditional server design and a conversion of SAN storage to shared DAS.

The traditional server has a CPU, some memory and some direct-attached storage (DAS) and it connects across a FC/Ethernet network to a shared storage area network (SAN) or filer.

What is being envisaged is a fast, PCIe fabric-linked core of shared flash storage, conceptually surrounded by a ring of compute engines, each with DRAM and, say, 3D XPoint memory. The core DSSD array will stream old data off to a slower bulk-capacity disk drive array and/or to the public cloud.

In this sense, the DSSD design is a revolutionary approach which, as well as rewriting the rules for flash arrays, will cause servers to be redesigned as well. ®

Storagenote

* Pure Storage has been asked about developing its own proprietary flash module design.