Last week's EmTech 09 meeting played host to a panel discussion on the future of data storage. All three of the panelists were from companies with little-known products already on the market, and each of them discussed improvements that are in the pipeline, which we'll cover towards the end of this article. But they also provided a more general overview of the challenges facing storage technology at a time when data production is beginning to outstrip our ability to cope with it.

Ed Doller, of memory maker Numonyx, put things into perspective by discussing the launch of the iPhone 3GS. The hardware itself doesn't store all that much, but its capabilities led to downstream issues: within a few weeks of its release, mobile uploads of videos to YouTube had shot up by roughly 400 percent, and it's likely that other data-intensive activities will follow personal video before very long.

Managing the explosion of data brings its own challenges, not the least of which is maintaining the ability to read older data formats. Peter Lorraine of GE's Global Research group said that about two-thirds of the data obtained by NASA's Viking landers is now unreadable, and mentioned that many hospitals are on the verge of similar issues with patient medical records.

Doller set up the problem nicely: everyone wants more bandwidth and less latency. They're very different challenges, but each has a solution in the fact that a single interface, like SATA, can act as an abstraction over multiple pieces of hardware, often based on several different technologies.

All hard drives have fast RAM caches providing quick access to a data subset stored on the disk itself. In the same way, huge working sets can be held in a large cache that's backed by a massive drive array. The relative amounts of high-speed volatile and slower, stable storage can be tailored according to the application, balancing the trade-offs among cost, reliability, power use, density, and performance.
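That tiering can be sketched in a few lines of Python. This is a toy model, not any vendor's design: a small, fast cache (standing in for RAM) sits in front of a large, slow backing store (standing in for a drive array), and the cache size is the knob that trades cost against performance.

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of tiered storage: a small LRU cache backed by a
    large, slow store. All names and sizes are illustrative."""

    def __init__(self, cache_size):
        self.cache_size = cache_size   # more fast storage = fewer misses
        self.cache = OrderedDict()     # LRU order: oldest entry first
        self.backing = {}              # the "massive drive array"
        self.hits = 0
        self.misses = 0

    def write(self, key, value):
        self.backing[key] = value      # write through to stable storage
        self._fill(key, value)

    def read(self, key):
        if key in self.cache:          # fast path
            self.cache.move_to_end(key)
            self.hits += 1
            return self.cache[key]
        self.misses += 1               # slow path: fetch from backing store
        value = self.backing[key]
        self._fill(key, value)
        return value

    def _fill(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        while len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict least recently used
```

Shrinking `cache_size` cuts the cost of fast storage at the price of more slow-path reads, which is exactly the application-specific balancing act described above.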

In many cases, the abstraction provided by hardware interfaces can be essential for the function of the device. Doller described NAND flash memory as a bit too noisy and error-prone to use directly, but noted that these issues could be masked by error correction algorithms run in the interface hardware itself.
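The kind of masking Doller described can be illustrated with a textbook Hamming(7,4) code, which stores 4 data bits in 7 cells and lets the controller silently correct any single flipped bit. This is a classroom sketch, not Numonyx's actual correction scheme, which is far more sophisticated:

```python
def encode(d):
    """Pack 4 data bits into a 7-bit Hamming codeword.

    Layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4,
    with parity bits at the power-of-two positions."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(codeword):
    """Recover the 4 data bits, correcting a single flipped bit.

    The three recomputed parities form a syndrome that spells out
    the 1-indexed position of the error (0 means no error)."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]
```

Because the decoder runs in the interface hardware, software above it sees clean data even when an individual cell misbehaves, which is the abstraction at work.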

The current generation of next-gen tech

With the set of five trade-offs laid out by Doller in mind, each of the panelists discussed where their technologies fit into the emerging storage picture.

Slow and stable: Optical disk media has perpetually lagged magnetic disks when it comes to speed, but it has two key advantages: given the right starting materials, it's got a lifetime of decades or more, and it's certainly faster than the current choice for long-term archiving, tape. GE's Peter Lorraine said that 50 million terabytes of tape is sold every year, so there's significant room on the market. It also has the potential to reach very high densities, as the only physically limiting factor is the wavelength of the light used to read and write to the media.

His group sees optical's challenge as increasing the density of the storage without sacrificing backwards compatibility, and the solution they've arrived at is holographic storage. Those of you who are thinking of 3D images can stop; the individual points in these disks create very simple interference patterns, rather than a complex image. But these patterns can easily be stacked on top of each other, as reading them out doesn't require the same sort of reflective process used in current optical media. The net result is a DVD-sized disk that can hold 500GB of data, with further refinements possible. "100 layers of Blu-ray-like storage is where things are headed," Lorraine said.

The drive mechanism looks similar to current generations of optical storage, allowing the drives to maintain backwards compatibility. Lorraine also said that it's possible to create masters that can be replicated to disks in about 5 to 10 seconds, meaning that this may eventually make its way to consumer devices.

Big but fast: Saied Tehrani of Everspin gave a brief description of the company's MRAM technology, which is a spin-off of intellectual property generated by Freescale. It's a mixture of standard silicon and magnetic materials; as he described it, it is structured much like standard DRAM, but with a magnetic material replacing the capacitor. Two magnetic layers flank a barrier to tunneling electrons; when the two layers' orientations are parallel, electrons cross the barrier more easily. Everspin is currently producing 16MB MRAM modules, and Tehrani laid out some impressive figures for them: reads and writes occur with 35ns latencies, it can last for an essentially unlimited number of cycles, data is retained for at least 20 years even with the power off, and it can be radiation hardened for use in space.
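As a rough sketch of the read mechanism Tehrani described (all numbers here are invented for illustration, not Everspin's device parameters): writing a bit sets the relative orientation of the two magnetic layers, and reading measures how easily current tunnels through the barrier.

```python
# Toy magnetic tunnel junction model; values are illustrative only.
R_PARALLEL = 1000.0                        # ohms: aligned layers, easy tunneling
TMR = 1.0                                  # tunnel magnetoresistance ratio
R_ANTIPARALLEL = R_PARALLEL * (1 + TMR)    # opposed layers resist tunneling

def write_bit(value):
    """Storing a bit sets the free layer parallel (1) or antiparallel (0)."""
    return "parallel" if value else "antiparallel"

def read_bit(orientation, read_voltage=0.1):
    """Reading measures current through the junction: high current
    (low resistance) means the layers are parallel, i.e. a 1."""
    r = R_PARALLEL if orientation == "parallel" else R_ANTIPARALLEL
    current = read_voltage / r
    threshold = read_voltage / (R_PARALLEL * (1 + TMR / 2))  # midpoint
    return 1 if current > threshold else 0
```

Because the state is held magnetically rather than as stored charge, nothing leaks away when the power goes off, which is where the 20-year retention figure comes from.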

With all that going for it, why aren't we using it? In a word, density. MRAM has nothing that compares with the gigabytes that can be stored in the latest flash modules. Still, Everspin is forging ahead with plans for next-generation technology based on the spin momentum of electrons. He expects that chips based on this technology will offer 10ns access times and use one-fifth the power per bit of the equivalent flash technology. Until the density problem gets sorted out, he suggested that MRAM will likely be limited to specialty cases, such as storing file system metadata in RAID controllers.

A change of phase: Numonyx appears to be in a similar position to Everspin, in that it's shipping a form of RAM that promises very high speeds and long-term stability, but is currently doing so on an older process technology. Right now, the company is making phase change memory with 90nm features. Its bet on this technology apparently received a worrisome bit of validation late last week, when Samsung announced it was ready to start mass production of the material.

Phase change memory relies on a class of alloys called chalcogenides, which can adopt crystalline or amorphous forms; the crystalline form offers much lower resistance to current. Switching between the two states can be done simply by heating the alloy and carefully controlling the cooling process. Fortunately, "carefully controlled" doesn't mean "slow": Doller said that his company has phase change devices with 17 times the access speeds of SSDs.
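The set/reset logic can be caricatured in a few lines; the temperatures, state names, and thresholds below are illustrative placeholders, not real GST parameters.

```python
# Toy model of programming a phase change cell. All values invented.
T_MELT = 600.0    # deg C: above this, the alloy melts
T_CRYST = 350.0   # deg C: held here, the alloy crystallizes

def program(cell_state, peak_temp, cooling):
    """RESET: melt, then quench quickly -> amorphous (high resistance).
    SET: heat past the crystallization point, cool slowly ->
    crystalline (low resistance)."""
    if peak_temp >= T_MELT and cooling == "fast":
        return "amorphous"
    if peak_temp >= T_CRYST and cooling == "slow":
        return "crystalline"
    return cell_state  # pulse too weak: state unchanged

def read(cell_state):
    """Low resistance (crystalline) reads as 1."""
    return 1 if cell_state == "crystalline" else 0
```

The key point the sketch captures is that write speed is set by the heating pulse and cooling profile, not by any slow chemical process, which is why "carefully controlled" cooling can still be fast.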

The bits themselves are composed of a top electrode coated in an alloy called GST (for its components, Ge-Sb-Te). Below that is a combination electrode (for reading the bit's state) and resistor, which provides the focused heating needed to flip it. Like Tehrani, Doller expects phase change will be relegated to specialty uses, but his company is positioning it for when NAND flash runs up against noise limits at smaller feature sizes.

A long haul for new tech

I asked the panelists what they thought about some of the potential memory technologies that had appeared in the research literature over the past couple of years, which involve things like carbon nanotubes or atomic force microscopes. The panelists were uniformly excited about the potential of these demonstration projects, but recognized that they're just the latest in a long list of things that have been proposed for data storage.

To make it to market, it's not enough to simply have fast, compact hardware. It needs to be produced at scale, and the mass-produced hardware has to be very, very reliable. A lot of technologies have flunked this test, or waited decades for specific bottlenecks to be overcome. Even in the best-case scenarios, it's typically a decade between development in the lab and mass production, so a technology that first appeared in the literature a couple of years ago is still probably about eight years from market.

It's a caution worth keeping in mind, as a publication describing something with the potential to revolutionize storage seems to show up in the scientific journals every few months.