For as long as most people can remember, the go-to medium for storing data persistently has been the humble hard drive. Originally invented by IBM and released in 1956, the hard disk drive is about to celebrate 60 years of (almost) faithful service in delivering our data. It’s reasonable to assume that hard drives will be with us for some time yet.

However, there's no doubt that the rapid rise of solid-state disks, which use NAND flash, is a distinct threat to the future of hard drives. The emergence of 3D NAND likely will hasten the decline of HDDs. Let's look at the evolution of hard drives and NAND flash storage to see where the storage industry is headed.

HDDs: Performance lags capacity

Looking at the growth curve for capacity, HDDs have consistently increased in capability, with capacity growing exponentially and currently topping out at 10 TB. Unfortunately, capacity is not the only metric used when measuring storage capability; performance is just as important -- if not more so -- to modern IT systems.

Hard drive performance hasn’t managed to keep pace with the capacity trends. In terms of rotational speed, HDDs have not gone past 15K RPM, although Western Digital was looking at 20K drives around 2008 as a way to combat the rise of SSDs (more on that later). Although we’ve seen modest improvements, HDDs still offer performance characteristics similar to those of 20 years ago, and if we measure I/O density (IOPS capability divided by capacity), we see a significant downward trend in this metric -- the reverse of what the market requires today.
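The I/O density metric described above can be sketched with a quick calculation. The IOPS and capacity figures below are rough, assumed values chosen for illustration, not vendor specifications:

```python
# I/O density = random IOPS capability divided by capacity.
# All figures below are assumed, illustrative values.
drives = {
    "15K RPM HDD, 600 GB": (200, 600),       # (random IOPS, capacity in GB)
    "7.2K RPM HDD, 10 TB": (80, 10_000),
    "Enterprise SSD, 1 TB": (50_000, 1_000),
}

density = {name: iops / gb for name, (iops, gb) in drives.items()}
for name, d in density.items():
    print(f"{name}: {d:.3f} IOPS/GB")
```

Even with generous assumptions, the high-capacity HDD delivers a fraction of the I/O density of the small 15K drive, while the SSD sits orders of magnitude above both, which is the trend the article describes.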

As if these performance problems weren’t enough, another issue impacting the ability of drives to keep pace with demand is the way data is recorded on the disk platters inside the HDD. The industry has gained recent increases in capacity by using a technique known as shingled magnetic recording (SMR). SMR makes the physical tracks on the HDD narrower by overlapping concentric tracks, allowing more tracks to be stored on each platter. The downside to overlapping the tracks is that data can no longer be written to an individual track in place. Instead, an entire area of adjacent tracks has to be read and rewritten, significantly impacting write performance, with write throughput at around half that of reads.
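The read-modify-write penalty of SMR can be modeled in a few lines. This is a toy sketch: real SMR drives manage overlapping tracks in zones with firmware-level indirection, and the zone size here is an arbitrary assumption.

```python
# Toy model of SMR's write penalty: tracks within a zone overlap,
# so updating one track forces a read and rewrite of the whole zone.

def smr_rewrite(zone, track_index, new_data):
    """Return (new_zone, tracks_read, tracks_written) after updating one track."""
    buffer = list(zone)             # read back every track in the zone
    buffer[track_index] = new_data  # modify just one track in memory
    # Overlapped tracks can't be updated in place without clobbering
    # their neighbours, so the entire buffer is rewritten in order.
    return buffer, len(zone), len(buffer)

zone = ["t0", "t1", "t2", "t3"]
updated, reads, writes = smr_rewrite(zone, 1, "t1'")
print(reads, writes)  # 4 tracks read and 4 rewritten to change one track
```

One logical track update costs a whole zone of reads and writes, which is why SMR write throughput lags so far behind reads.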

Flash evolution

So the HDD market is moving towards being a capacity-only medium. There’s talk of vendors phasing out 10K drives, and 15K RPM drives aren’t increasing in capacity. Part of this market shift has been the growing popularity of solid-state disks. SSDs use NAND flash, a type of non-volatile memory that, unlike DRAM, retains its contents when the power is removed. Solid state drives have performance characteristics many times better than hard drives, even with totally random workloads.

SSDs were originally based on a technology called single-level cell (SLC). SLC stores one bit of information, either a “0” or a “1,” by recording a charge within a cell that combines a single transistor and a “floating gate.” A low voltage reading across the transistor means “0” and a high voltage means “1.”

SLC was quickly superseded by multi-level cell (MLC), which uses multiple charge states and voltage levels to store four different values, representing 00, 01, 10 or 11 in binary. As the voltage tolerances of MLC are tighter, the endurance (a measure of the lifetime of the memory) of the NAND is lower than that of SLC (writes to flash are gradually destructive). Through better NAND fabrication and a range of management techniques (such as wear levelling), the endurance of MLC has been increased to make it viable for enterprise use, in a version commonly referred to as enterprise MLC. MLC is much cheaper than SLC because it doubles capacity without increasing the number of cells.
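Wear levelling, one of the management techniques mentioned above, can be sketched very simply: steer each write to the least-worn block so no single block exhausts its program/erase budget early. The block count and write pattern here are arbitrary assumptions.

```python
# Minimal wear-levelling sketch: always write to the flash block with
# the fewest program/erase (P/E) cycles so wear spreads evenly.
erase_counts = [0] * 8   # P/E cycle count per flash block (assumed 8 blocks)

def pick_block():
    # Choose the least-worn block for the next write.
    return min(range(len(erase_counts)), key=lambda b: erase_counts[b])

for _ in range(80):      # simulate 80 block writes
    erase_counts[pick_block()] += 1

print(erase_counts)  # wear is spread evenly across all blocks
```

Real SSD controllers combine this idea with mapping tables, over-provisioned spare blocks and garbage collection, but the principle of equalizing P/E cycles is the same.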

The next step in flash evolution is triple-level cell (TLC), which, despite its name, stores not three voltage levels but three bits of data across eight voltage levels (000 through 111). Again, with the reduced voltage tolerances between states, endurance for TLC is lower than for MLC, but still good enough to develop enterprise-class products that further reduce the cost per TB.
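The relationship between bits per cell and voltage levels described above is simply powers of two: n bits per cell require 2^n distinguishable charge levels, which is why tolerances shrink (and endurance drops) as density rises.

```python
# n bits per cell -> 2**n voltage levels the controller must distinguish.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3}

encodings = {}
for name, bits in cell_types.items():
    levels = 2 ** bits
    encodings[name] = [format(v, f"0{bits}b") for v in range(levels)]
    print(f"{name}: {bits} bit(s) per cell, {levels} levels -> {encodings[name]}")
```

SLC distinguishes 2 levels, MLC 4 (00 through 11), and TLC 8 (000 through 111), matching the progression in the text.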

Figure 1: Samsung’s 3D V-NAND technology.

MLC and TLC NAND have been enhanced in terms of capacity through the use of 3D technology. Traditional NAND (now referred to as planar) stores data in a 2D arrangement on silicon. 3D NAND, as the name implies, creates a 3D structure by etching down into the silicon to produce multiple layers of transistors and gates. Current products on the market support 48 layers, with Samsung predicting hundreds of layers for its 3D V-NAND technology (pictured above) and capacities of 1 Tb per chip (4x today’s products) by 2017.

Meeting enterprise storage requirements

All of the above technology talk sounds great, but how does this translate to storage in the enterprise? Storing data is typically based on three metrics: capacity, performance and cost. Demands to store more data have increased capacity requirements year over year, while speedier processors and memory have increased the I/O density past the point where HDDs alone can keep up with the needs of the application.

Vendors have historically addressed the problem through tiering, hybrid solutions, and now all-flash arrays. The introduction of 3D TLC NAND, in conjunction with data reduction technologies like compression and deduplication, has reduced the cost of all-flash systems to the point that they are comparable with high-performance (15K RPM) hard drive systems on total cost of ownership.
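The effect of data reduction on flash economics is straightforward arithmetic. The raw price and reduction ratio below are assumed, illustrative figures, not quoted vendor numbers:

```python
# Data reduction lowers the effective cost per usable GB of flash.
# Both figures below are assumed for illustration only.
raw_flash_cost_per_gb = 4.00   # assumed raw $/GB for 3D TLC flash
data_reduction_ratio = 3.0     # assumed combined compression + dedupe ratio

effective_cost = raw_flash_cost_per_gb / data_reduction_ratio
print(f"${effective_cost:.2f}/GB effective")
```

Under these assumptions, a 3:1 reduction ratio brings a $4/GB raw medium down to roughly $1.33 per usable GB, which is how all-flash systems land in hard-drive territory on a TCO basis.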

At the same time, vendors have realized that end users aren’t using the level of performance that SLC and MLC offer, meaning that for most use cases TLC easily has the performance and endurance to serve general enterprise workloads. Add to this the capacity increases achieved with 3D technology and we can see where the market is heading.

TLC and 3D NAND products are in use today by Dell, SolidFire, Kaminario, Pure Storage and HPE 3PAR, with typical prices ranging between $1 and $2 per usable GB. While this is more expensive than HDD-based systems on pure capital cost, a total cost of ownership that factors in space, power, cooling and the elimination of overprovisioning makes all-flash a genuine platform for all workloads. While high-capacity hard disk drives may be around for some time, the end of 10K and 15K RPM drives in the enterprise has definitely arrived.