Faster and dramatically more power-efficient than rotating magnetic media, solid-state disks (SSDs) are one of the longest-awaited and most eagerly anticipated technologies of the past two decades of computing. The theoretical underpinnings of mass storage with no moving parts have been with us for decades, but the improvements needed to put solid-state storage within economic and technological reach of ever-larger segments of the storage market have been slow in coming. As the tipping point draws nearer with ever-increasing momentum, let's take a look back at the long journey to the practical SSD, and a look forward at the likely future progress of this technology.

A brief history of flash memory

Solid-state memory has been around since very nearly the beginning of computing, but a great many of the details have changed since the days when ferrite-core memory was threaded together from magnetic rings and copper wire. The first nonvolatile semiconductor memory technology even theoretically suitable for use as a disk was Electrically Erasable Programmable Read-Only Memory (EEPROM), invented by Intel in 1978. Using floating-gate transistors to store bits of information, with dedicated read, write, and erase circuitry for each cell, EEPROM achieved roughly the read performance of RAM without the volatility, along with the ability to be rewritten many times. In theory, with the right device engineering, it could have been used as main storage for a PC.

There were serious problems, though: EEPROM was slow, expensive, power-hungry, and not durable. Erasing and rewriting a byte of EEPROM could take milliseconds (most of that time consumed by the erase operation), so a full system built on it wouldn't have been much faster than a hard disk. Moreover, over 2,500 of Intel's initial EEPROM chip, the 2KB 2816, would have been required to match Seagate's 1980 5MB ST506, at a cost of many tens of thousands of dollars next to the ST506's $1,500. Such an EEPROM proto-SSD would have consumed about 1.5 kW of electricity under a full writing load, roughly a hundred times the consumption of the hard disk. Individual bits of EEPROM would burn out after about ten thousand write-erase cycles, a limit the entire disk could reach in as little as three days of continuous writing. For these reasons, among many others, the idea of an EEPROM SSD was never even floated.
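
To put those figures in rough perspective, here is a back-of-the-envelope sketch of the math; the per-chip power draw is an illustrative assumption chosen to line up with the totals above, not a datasheet value:

    # Back-of-the-envelope sizing of a hypothetical 1980 EEPROM "disk"
    # built from 2KB Intel 2816 chips to match a 5MB Seagate ST506.
    target_bytes = 5 * 1024 * 1024              # treat 5MB as 5 * 2^20 bytes
    chip_bytes = 2 * 1024                       # one 2816 chip holds 2KB
    chips_needed = target_bytes // chip_bytes   # = 2,560 chips ("over 2,500")

    watts_per_chip = 0.6                        # assumed active-write draw (illustrative)
    array_watts = chips_needed * watts_per_chip # ~1.5 kW under full writing load
    print(chips_needed, array_watts)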

The next new technology to come down the pike, invented by Toshiba in 1980, announced in 1984, and commercialized by Intel in 1988, was a refinement of EEPROM that offered partial solutions to these problems. Called NOR flash memory, the new technology used vastly less power and allowed more write/erase cycles before burnout. NOR flash omitted the circuitry needed to erase each byte individually; instead, it divided the available memory into blocks that could only be erased together. This dramatically raised the speed of erase operations, but it broke byte addressability for write operations and necessitated an awkward "read the block, erase it, write the modified block back" sequence for partial-block writes. The simplified circuitry also reduced overhead and die size, cutting costs dramatically. At this point, cost became the primary theoretical impediment to the development of SSDs, and a turning point was reached.
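
A minimal sketch of that read-modify-write sequence, assuming a hypothetical NOR-style device object with byte-addressable reads and block-granular erases (the interface and block size here are illustrative, not any vendor's actual command set):

    BLOCK_SIZE = 64 * 1024  # erase-block size, chosen purely for illustration

    def write_bytes(flash, addr, data):
        """Modify bytes at `addr` on a block-erase device via read-erase-rewrite.
        Assumes the write does not cross an erase-block boundary."""
        block_no = addr // BLOCK_SIZE
        offset = addr % BLOCK_SIZE
        # 1. Read the whole block containing the target bytes.
        block = bytearray(flash.read(block_no * BLOCK_SIZE, BLOCK_SIZE))
        # 2. Apply the modification to the in-memory copy.
        block[offset:offset + len(data)] = data
        # 3. Erase the block -- the only erase granularity the device offers.
        flash.erase_block(block_no)
        # 4. Program the modified copy back into the freshly erased block.
        flash.program(block_no * BLOCK_SIZE, bytes(block))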

Of course, as time goes on, continuing development in semiconductor fabrication increases the speed, capacity, and durability of SSDs built to the same design on the same underlying technology. Costs continually fall. But the price-competitive position of flash memory against hard disks isn't actually helped much by this. In 1997, hard disk prices were a factor of 30 lower than the spot price of flash memory, and by 2003 the difference was over 100-fold. Even Moore's Law can't compete with hard disk price scaling, which, ever since the advent of giant magnetoresistive (GMR) heads, has outpaced it dramatically.

Nonvolatile memory in the age of NAND

To make flash more competitive with magnetic storage, new technologies have been popularized that change the layout of flash still further. Optimized for low cost rather than speed, succeeding generations of flash technology have been less about making flash more suitable for disks than about bringing the advantages of NOR flash to more markets by lowering its cost.

The first of these innovations, introduced in 1987 and commercialized in 1989, was NAND flash. The floating-gate transistors of NAND are wired in series, in the arrangement of a NAND gate, which permits much denser packing of storage lines and allows lines containing failed transistors to keep functioning. However, this means that NAND reads are not byte-addressable: NAND is read in pages, which are larger than bytes but smaller than blocks. Between the saved read circuitry, the denser packing, and the lower process-quality requirements, the cost of NAND is much lower than that of NOR, and its density much higher. Its write endurance, however, is lower than NOR's.
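
As a rough illustration of what page-granular reads mean in practice, the sketch below maps a byte address onto the page that must be fetched in full; the page and block sizes are example values, not tied to any particular part:

    PAGE_SIZE = 2048        # bytes per page (example value)
    PAGES_PER_BLOCK = 64    # pages per erase block (example value)

    def locate(byte_addr):
        """Return (block, page within block, offset) for a byte address."""
        page = byte_addr // PAGE_SIZE
        offset = byte_addr % PAGE_SIZE
        return page // PAGES_PER_BLOCK, page % PAGES_PER_BLOCK, offset

    # Reading even a single byte still costs a full page read: the controller
    # transfers all PAGE_SIZE bytes of the page and extracts the one it wants.
    block, page, offset = locate(1_000_000)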

More than this, though, NAND turns read reliability into a statistical shell game. Individual reads have a much higher bit error rate than any of the previously discussed types of memory, so much more aggressive error-correction coding is required to ensure that NAND reads are accurate. Error correction begins with checksum bits in each page; ECC bits within each block, and across blocks, escalate the level of error protection.
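
Purely as an illustration of the idea (real NAND controllers use correcting codes such as Hamming or BCH that can repair errors, not the simple detect-only checksum shown here, and the page read/write interface is hypothetical), per-page protection amounts to storing redundant bits alongside the data and checking them on every read:

    import zlib

    def program_page(flash, page_no, data):
        """Store a page along with a checksum kept in its spare area."""
        checksum = zlib.crc32(data).to_bytes(4, "little")
        flash.write_page(page_no, data, spare=checksum)

    def read_page(flash, page_no):
        """Read a page back and flag (though not correct) bit errors."""
        data, spare = flash.read_page(page_no)
        if zlib.crc32(data).to_bytes(4, "little") != spare:
            raise IOError(f"bit errors detected in page {page_no}")
        return data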

The latest twist on flash memory is MLC technology. All the systems we've discussed so far store information by toggling each cell between two states, neutral and charged; this is a single-level cell, or SLC, design. In what has come to be called multi-level cell (MLC) flash, each cell has a neutral voltage and three levels of charged state, for a total of four states capable of storing two bits of information. While this scheme allows more data storage per transistor, and is hence much cheaper, reading more finely differentiated voltages requires finer measurement, which is in turn slower and more error-prone. It also reduces the usable life of the flash thus produced. So SLC is superior in performance and endurance, while MLC is cheaper and denser. The performance and endurance gaps are narrowing as process technology improves, while the price and density gaps widen.
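
A toy model of the difference: an SLC cell is read against a single threshold, while an MLC cell's voltage has to be resolved into one of four windows, each mapped to a two-bit pattern. The thresholds and the state-to-bits mapping below are arbitrary illustrations, not real device characteristics:

    MLC_THRESHOLDS = [1.0, 2.0, 3.0]      # three boundaries carve out four windows
    MLC_BITS = ["11", "10", "01", "00"]   # example state-to-bit-pair mapping

    def read_mlc_cell(voltage):
        """Resolve a sensed cell voltage into a two-bit value (four states)."""
        state = sum(voltage >= t for t in MLC_THRESHOLDS)
        return MLC_BITS[state]

    def read_slc_cell(voltage, threshold=1.5):
        """SLC needs only one comparison, so reads are faster and more robust."""
        return "1" if voltage < threshold else "0"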

As the balance shifts toward MLC, MLC technology with still more levels per cell becomes feasible. The key advances were first made by a small Israeli company called M-Systems, which released some of the first NAND flash drives for use in rugged industrial and military computers, including the onboard systems of Israel Defense Forces tanks and aircraft. A number of years ago, M-Systems developed techniques for even finer voltage-level setting and measurement in NAND cells, enabling a single flash cell to store three bits (eight levels) or even four bits (16 levels). Referred to as X3 and X4 technology, this approach takes MLC's price-and-density vs. speed-and-durability tradeoff even further.
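
Generalizing the same idea, the number of voltage windows grows exponentially with the number of bits per cell while the margin separating adjacent windows shrinks, which is where the extra error and wear come from. The sketch below is schematic only, assuming a fixed usable voltage range:

    USABLE_RANGE_V = 3.0    # assumed total voltage swing available (illustrative)

    for bits in (1, 2, 3, 4):                   # SLC, MLC, X3, X4
        levels = 2 ** bits                      # 2, 4, 8, 16 distinguishable states
        margin = USABLE_RANGE_V / levels        # rough width of each voltage window
        print(f"{bits} bit(s)/cell: {levels} levels, ~{margin:.2f} V per window")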

M-Systems' X3 and X4 technology is currently in the hands of SanDisk, which bought M-Systems in 2006 and gained exclusive control of the patents in an arbitrated dispute with Samsung. The concluding judgment in the dispute ruled that SanDisk wasn't bound by M-Systems' license agreement with Samsung, so SanDisk and its partner Toshiba are now ahead in X3 and X4 technology, while behind in process technology. The Intel-Micron flash partnership, though, is advancing in X3, and in the long run, achieving higher per-cell bit densities, possibly including partial bits, will be an ongoing competition among manufacturers, not a one-sided arrangement.

The continuing evolution of nonvolatile memory technology, from EEPROM to NOR to NAND to MLC NAND to X3 and X4 NAND, along with process improvements, is pushing the price per bit of semiconductor memory products closer and closer to that of hard disk drives, while retaining, at least in part, the former's theoretical advantages in speed, latency, form factor, and ruggedness. Given this climate, the eventual penetration of SSDs into storage segments currently dominated by hard disks is effectively assured. The only questions are when and how it will happen.