








Of course, capacity is not always the most important measure for a hard drive, and advances in other areas have slowed considerably. (Yes, we're now seeing spindle speeds of 15,000RPM, but with that modest increase come power, noise and reliability issues.) As the chart above illustrates, CPU performance went from 1 MIPS to 16,800 MIPS between 1988 and 2008. Over the same period HDD performance increased by only 11 times. That disconnect -- the performance gap between processor and storage access times -- is often why your computer feels so frustratingly slow. So while manufacturers continue to wring every last bit of usability out of magnetic-drive storage, the writing's on the wall: the king's reign is coming to an end.







Early consumer drives and a maturing technology







To quickly recap: SSDs had been around in various forms since the early 1950s, with DRAM emerging as the preferred technology in the 1980s, but only in a limited number of specialty applications. By the 1990s, flash memory had proven itself a capable storage medium in, for example, digital cameras. And even though writing to flash was not as fast as writing to DRAM, in hindsight it seems inevitable that someone would exploit the cheaper medium to offer flash-based storage for the high-performance enterprise market. That meant not just for systems designed to operate in harsh environments -- like the seismic data acquisition drive sold by Texas Memory Systems -- but, increasingly, database and web servers. And, eventually, consumer PCs.



Before that happened, though, an Israeli company named M-Systems developed an innovative hardware design for flash memory. In 1995, it debuted the DiskOnChip, which, as you can see above, was pretty much what the name indicates. Thanks to proprietary software called True Flash File System, the chip appeared as a hard drive to the host computer. TrueFFS also implemented the features we'd associate with today's SSD controllers, including error correction, bad block re-mapping and wear leveling. (Don't worry, we'll get to those.) This was, in essence, the first flash drive; four years later, M-Systems adapted the idea to create the first USB flash drive, called DiskOnKey, since the company saw it as a hard disk you could carry on a keychain. To put the phrase "hard disk" into 1999's context, you could buy 8, 16 or 32MB versions.



DiskOnChip wasn't aimed at consumers, but it proved the viability of NAND-based disks. That meant competition for RAM-based SSDs, and other firms, often small manufacturers, began to experiment with different form factors and configurations. Adtron, Cenatek, Atto, SimpleTech, Memtech and others all took their shots at industrial- and military-grade SSDs. In 1999, BiT Microsystems introduced an 18GB drive: the unfortunately named SUX35 was its first Ultra SCSI-compatible disk.



Meanwhile, Bill Gates envisioned the same technology eventually reaching mainstream consumers. Unveiling Microsoft's Tablet PC in 2002 (and after sharing the stage with first Amy Tan, then Rob Lowe) he said, "Eventually even the so-called solid state disks will come along and not only will we have the mechanical disks going down to 1.8 inch, but some kind of solid state disk in the next three to four years will be part of different Tablet PCs."



Gates had the timeline about right: in 2005 Samsung entered the fray, the first multi-billion dollar company to throw its hat in the ring. Sammy offered 1.8-inch and 2.5-inch drives, and in 2006 it introduced the first high-volume Windows XP notebook with flash-based SSD storage. The Q30-SSD, pictured to your right, came with 32GB of NAND and cost a blistering $3,700. As we noted at the time, that was about a $900 premium over its magnetic-drive sibling. The company's 7-inch Q1 UMPC also offered the solid-state option -- upping the price to $2,430. Needless to say, sticker shock limited the appeal of these early efforts. Nevertheless, they were an important step as SSDs crept closer to the mainstream.







Why SSDs?

Here's a good place to take a break from our story and examine more closely the appeal of SSDs. As we've alluded to, they promise greater speed, power savings and quiet operation. Today we sometimes think of the performance boost as most persuasive, but for early laptop users, energy efficiency mattered just as much, if not more. (Ask someone who's replaced his or her laptop HDD with an SSD about the benefits to battery life.) But surely for Samsung to charge a $900 premium -- and for someone to pay it -- the impact must be great indeed. In fact, skeptics continue to ask that question on message boards throughout the internet: how good could the technology be, to justify the smaller capacities and higher prices? In one of his typically comprehensive and informative reviews, Anand Lal Shimpi also emphasized performance, responding, "You don't think they're fast, until I take one away from you." Since we can't have a massive, Oprah-style SSD giveaway (and later cruelly snatch them back), we'll walk through how they work, and why they outperform HDDs.

Remember, today's hard drives are approaching two physical limits: data density, which defines how much information can be written on a given area, and spindle speed, or how fast the platter spins. Greater data density gives us higher capacity drives, but is limited by the superparamagnetic effect, barring new approaches. Spindle speed is one way of increasing throughput, but that's topped out at 15,000RPM; rumors of a 20,000RPM VelociRaptor seem to have come to naught, perhaps because WD rethought the market. Perpendicular recording increases data density and throughput, assuming the same spindle speed, but as you can see from the diagram on the left, the dependence on moving parts imposes other limitations. For example, if you've ever let your hard drive spin down, then tried to read from it, you've probably noticed a small but perceptible lag. That's the drive spinning up and the read-write head moving across the platter to find your data.



Solid-state drives have none of the limitations associated with moving parts. There's no spin-up time, because there's nothing to spin; because there's no read-write head and all parts of the drive are equally accessible, latency and seek times are constant and low. The lack of moving parts means less power consumption, since the drive doesn't have to move heads or spin platters. And the probability of mechanical failure in the form of, say, a head crash, is non-existent. There's no head to crash.



All of which makes SSDs sound radically different from their HDD relatives -- and they are. Those differences have both pros and cons, which we'll discuss momentarily, but first let's get a handle on how the flash memory underlying SSDs actually works.

Here's a basic illustration of a flash cell. Notice it shares no similarity with the hard drive diagram above: we're at the lowest layer of storage, where the most pressing question is how to represent ones and zeros. HDDs do this magnetically; flash does it using electrons. The number of electrons stored in the cell affects its threshold voltage: when the threshold voltage reaches four volts, that reads as zero. Anything less reads as a one. (In flash parlance, a zero is "programmed" and a one is "erased.") And the electrons are trapped in the gate even if power's lost, making this non-volatile memory.

In this basic configuration, each cell stores a single bit: it reads as either a zero or a one. That, logically enough, is known as Single-Level Cell (SLC) flash. However, using the exact same physical medium, we can store more bits simply by subdividing the threshold voltage ranges.
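The SLC read logic just described can be sketched in a few lines. This is a toy model: the four-volt threshold is the illustrative figure from the diagram above, not a real silicon value.

```python
def read_slc_cell(threshold_voltage):
    """Toy SLC read: a cell at or above the 4V threshold reads as 0
    ("programmed"); anything below reads as 1 ("erased")."""
    return 0 if threshold_voltage >= 4.0 else 1

print(read_slc_cell(4.2))  # programmed cell -> 0
print(read_slc_cell(1.1))  # erased cell -> 1
```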



Now we're recognizing four distinct threshold ranges rather than two. That means we can store two bits rather than one. Remember, we're using the same flash as before, so this would seem to be an advantage -- we've doubled the capacity without raising our cost per bit. As always, though, there's a trade-off. First, it's going to take longer to read and write to MLC flash: typically about twice as long to read and three times longer to write. However, we're talking about microseconds, so the difference is negligible for most applications.
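Subdividing the voltage span can be sketched the same way. The cut points here are hypothetical, purely to show how four ranges map onto two-bit values (lowest voltage meaning fully erased).

```python
import bisect

# Toy MLC read: four threshold ranges yield two bits per cell.
# The cut points are illustrative, not real silicon values.
CUTS = [1.5, 3.0, 4.5]             # boundaries between the four ranges
LEVELS = ["11", "10", "01", "00"]  # lowest voltage = fully erased ("11")

def read_mlc_cell(threshold_voltage):
    return LEVELS[bisect.bisect_left(CUTS, threshold_voltage)]

print(read_mlc_cell(0.5))  # -> 11
print(read_mlc_cell(2.0))  # -> 10
print(read_mlc_cell(5.0))  # -> 00
```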



More important is the problem of memory wear. Unlike HDD platters, which in theory can be written and re-written an infinite number of times, flash memory can only be programmed and erased a limited number of times before it's no longer writeable. It's called a P/E cycle limit, and with the earlier 50nm chips that number was about 100,000 cycles when used as SLC memory. MLC degrades faster, wearing out after about 10,000 P/E cycles. And as NAND structures have gotten smaller, so too have the number of P/E cycles they can undergo before wearing out. Sure, the numbers are still so high that the average, everyday user would consider the drive obsolete long before actually hitting the P/E cycle limit -- at which point the drive becomes read-only, while preserving the existing data. But when it comes to design and manufacturing, the limitation is very real, and needs a solution. Or rather, several solutions.
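To see why everyday users rarely hit the limit, here's a back-of-the-envelope lifetime estimate. Only the 10,000-cycle MLC figure comes from the text; the capacity, workload and ideal wear assumption are ours, purely for illustration.

```python
# Hypothetical drive and workload; the 10,000-cycle MLC limit is the
# figure quoted for 50nm-era flash.
capacity_gb = 128
pe_cycle_limit = 10_000
daily_writes_gb = 20
write_amplification = 1.0   # assume perfectly even wear

total_writable_gb = capacity_gb * pe_cycle_limit / write_amplification
years = total_writable_gb / daily_writes_gb / 365
print(round(years))  # -> 175: far beyond any drive's useful life
```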



The challenges of SSDs



As we've shown, flash SSDs present unique challenges. That's one reason established player Seagate came to the party late, and Western Digital eventually just bought its way in. (We also suspect uncertainty about the viability of the market and concern about cannibalizing sales of their mechanical drives had a little something to do with it.) And it's why a late-entering semiconductor giant named Intel -- presumably with the time and expertise to learn from others' mistakes -- could release its first drives to nearly universal acclaim, only to get hit later with claims of drive slowdown.



Despite Intel's initial denials, the drives did see a performance drop over time. Specifically, as the drives filled up, write speeds slowed, sometimes drastically. This wasn't just a problem with Intel's offerings, either. Once reviewers knew to look for it, they found it common to almost every SSD: as free space decreased, write performance took a hit. Most drives were still faster than conventional HDDs, but the difference was noticeable between a brand-new SSD and a "used" one. In hindsight the reason seems obvious, but it wasn't immediately so in 2009.



Here's where two aspects of flash memory converge to create a unique problem. The first, as we mentioned, is memory wear. No HDD controller has to account for the predictably finite lifespan of its underlying magnetic media. Flash SSDs do: they have to limit the number of P/E cycles in order to keep the drive in tip-top shape. Not only that, but remember that our P/E limit is per cell. It's roughly analogous to the problem of bad sectors, but in this case it's predictable and inevitable for every cell. Knowing this, controller designers want each cell to wear evenly -- spread the P/E cycles over the drive, rather than programming and erasing the same cells until they become unusable. This is called wear-leveling, and it's further complicated by our second aspect: the physical architecture of flash memory.
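The core idea of wear-leveling can be sketched as "always wear the least-worn block next." This is a deliberately naive model; real controllers also juggle static data, spare blocks and much more.

```python
# Naive wear-leveling sketch: direct each new program/erase cycle at
# the block with the fewest P/E cycles so wear spreads evenly.
# Block counts are illustrative.

def pick_block(pe_counts):
    """Return the index of the least-worn block."""
    return min(range(len(pe_counts)), key=lambda i: pe_counts[i])

pe_counts = [12, 3, 7, 3]   # P/E cycles consumed per block so far
for _ in range(4):          # simulate four more writes
    pe_counts[pick_block(pe_counts)] += 1

print(pe_counts)  # -> [12, 5, 7, 5]: the least-worn blocks caught up
```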



Remember the cells we introduced above. They're the smallest element of storage: in SLC flash they store a single bit, and in MLC flash they store two. Those cells are grouped into pages, typically 4KB in size (see the illustration to your left). A page is the smallest structure that's readable/writable in an SSD. Pages are grouped into blocks, which are the smallest erasable structure in an SSD. Now you might be seeing a problem. Why read, write and erase? HDDs don't have a separate erase function at the physical level. When you delete a file, that simply means removing a pointer. There's no action taken on the hard drive, no "erase" function. Your data remain magnetically encoded on the drive, which will eventually overwrite the "free" space.
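A hypothetical geometry makes the page/block asymmetry concrete. The 4KB page comes from the text; the 128-pages-per-block figure is an assumption typical of drives of this era, not a universal constant.

```python
# 4KB pages (smallest read/write unit) grouped into erase blocks
# (smallest erasable unit). 128 pages per block is an assumption.
PAGE_SIZE_KB = 4
PAGES_PER_BLOCK = 128

block_size_kb = PAGE_SIZE_KB * PAGES_PER_BLOCK
print(block_size_kb)        # -> 512: erasing touches half a megabyte

# Overwriting one 4KB page can thus drag along every other page
# sharing its block:
print(PAGES_PER_BLOCK - 1)  # -> 127 innocent-bystander pages
```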



But flash doesn't work that way. It's a different medium with different rules. SSD makers choose to play by these rules because the upside is vastly improved performance.



When you delete a file on an SSD, the process is initially the same: nothing happens. At least not at the physical level. No data disappears. Let's say you deleted a 4KB file that occupied a single page. That page is now free, as far as your operating system's concerned. Only if you have to overwrite that page will the SSD have to do some work. But when it does, it has to do more than the HDD, and that's the key to understanding performance degradation.



As we said, the HDD simply overwrites the sector with new information. The SSD, though, can't just overwrite a page. It has to erase the page first. Now remember the asymmetry between readable/writeable and erasable structures. To erase a page, you have to erase the entire block containing it. What about the other pages in the block? Well, they have to be read to a buffer, then written back after the block's been erased.



You can see how this leads to a drop in performance. You just tried to write a page, but in fact you ended up reading a block, erasing it, then re-writing it with the new data. What looks like a simple write operation is in fact a three step process. SSDs try to avoid this by writing to open pages first, but as space fills up the controller has fewer options. There's simply nowhere else to put the data without doing this read-erase-write shuffle.
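The read-erase-write shuffle above can be sketched as a toy simulation; page contents are strings purely for illustration.

```python
# Toy simulation of read-erase-write: changing one page in a full
# block means buffering the block, erasing it, and writing it back.

def overwrite_page(block, page_index, new_data):
    buffer = list(block)           # 1. read the whole block to a buffer
    block = [None] * len(block)    # 2. erase the block (smallest erasable unit)
    buffer[page_index] = new_data  # 3. patch the one page we wanted to change
    return buffer                  #    ...then write the whole block back

block = ["a", "b", "c", "d"]
print(overwrite_page(block, 2, "C"))  # -> ['a', 'b', 'C', 'd']
```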



Now, manufacturers recognize this and try to mitigate it in numerous ways. One is over-provisioning: including more flash on the drive than the user actually sees. Intel's X-25M, for example, shipped with 7.5-8% extra flash. The spare space means more open pages that won't require read-erase-write. But that just staves off the inevitable. If you keep filling up the drive, you will hit the performance barrier. The question is how well your drive will cope.
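As a quick worked example of over-provisioning: the 86GB/80GB split below is an assumed illustration consistent with the ~7.5% spare-area figure above, not the X-25M's actual internal layout.

```python
# Over-provisioning arithmetic with assumed figures.
raw_flash_gb = 86        # physical NAND on the drive
user_visible_gb = 80     # capacity the OS actually sees

spare_fraction = (raw_flash_gb - user_visible_gb) / user_visible_gb
print(spare_fraction)    # -> 0.075, i.e. 7.5% held back as spare pages
```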



Another helpful approach is the TRIM command, now implemented in most modern operating systems. This forces the SSD to manage deleted files right away, rather than wait for them to be overwritten. Delete a file and the OS tells the controller to copy the block to cache, erase it, and rewrite the remaining pages. That means pages within a partially used block are freed up automatically, during the delete phase rather than during an overwrite. Of course, sometimes you have to overwrite a file, say when you save a new version of it. You'll still suffer the read-erase-write penalty then, but TRIM can alleviate some of the pain. Some drives feature other methods of garbage collection, all with the same goal: free up deleted pages before it's necessary to overwrite them.



But wait a minute. Rewriting more blocks eats into our limited P/E cycles, right? That's right. TRIM and other methods of garbage collection contribute to a problem called write amplification. That simply means you're writing to the SSD more often than you should; ideally, the write amplification ratio would be 1, meaning the amount of data written to the flash memory is exactly the same as that written to the host. This remained an ideal, though Intel's early drives came close to reaching it. This number seemed a threshold, too; after all, how could you write less data to the flash memory than to the host? One company figured out how, and we'll pick up the story there.
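The write amplification ratio just defined is simple arithmetic; here's a sketch with illustrative figures.

```python
# Write amplification = data written to flash / data written by host.
# Figures below are illustrative.

def write_amplification(flash_bytes, host_bytes):
    return flash_bytes / host_bytes

# Worst case: the host writes 4KB but the controller must rewrite a
# whole 512KB block around it:
print(write_amplification(512 * 1024, 4 * 1024))  # -> 128.0
# Plenty of free pages lets the drive approach the ideal:
print(write_amplification(4096, 4096))            # -> 1.0
```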



Not all are created equal



That's Intel's X-25M, launched in late 2008. Sandisk, Toshiba and TDK had already entered the market, which really began to balloon in 2007. It recalled the early days of desktop hard drives, with players large and small trying to outdo one another.



Unfortunately, that didn't always mean a great experience for consumers. JMicron had begun offering its SSD controllers to smaller, independent vendors such as OCZ, Super Talent and Patriot Memory. The controller let those companies use the cheaper MLC flash, while Samsung, for one, stuck with the more expensive, lower-capacity SLC. But users of JMicron's early controllers found serious problems. While fast in benchmarks for sequential reading and writing, under real-world conditions the drives stuttered unacceptably.



The problem revealed some narrow thinking at JMicron. Optimized for sequential reading and writing, its controllers choked badly when it came to random 4KB writes. But most users don't spend their days reading and writing sequential files. Nor do they buy storage based solely on benchmarks. Instead, most have to use multi-tasking operating systems that -- you guessed it -- write a lot of small files. The JMicron hiccup occurred when writing those files interfered with other applications. The problem should have been caught early, before drives shipped to consumers, but instead buyers became unwilling beta testers.



Chipzilla's entrance made things interesting. The company had the expertise to build an impressive first controller and the industry pull to secure flash at bargain prices. Even so, many were surprised when it launched one of the world's fastest drives. A two-pronged attack -- the X-25M for consumer use and the X-25E sporting SLC for enterprises -- put Intel arguably at the top of the heap, performance-wise.

Here's where SandForce comes in. The company entered the SSD controller ring in 2009, emerging from stealth mode with a promising cache of proprietary technologies. Their big breakthrough? A quartet of tweaks that allowed MLC to replace the more expensive SLC without sacrificing durability or speed. Since MLC is twice as dense as SLC, that meant doubled capacity. It also let the smaller firms compete with SLC powerhouses like Samsung and Intel, who had privileged access to high-grade NAND chips.

OCZ shipped some of the first drives featuring a SandForce controller, and basically hit the limit on 3Gbps SATA, posting 265MB/s on a 2MB sequential read. Granted, most other high-end SSDs also bumped up against that SATA ceiling: high speeds are a large part of the appeal, and are virtually innate to the technology. Where SandForce really impressed, though, was in the sequential write tests, reaching 252MB/s and blowing even Intel's enterprise offering out of the water.



How'd SandForce go toe-to-toe with Intel? First, they had an intelligent data-monitoring system called DuraWrite. Remember, the problem with MLC is that doubling the NAND storage capacity shortens its P/E cycle limit by about a factor of ten. SandForce reasoned that if you wanted to make MLC competitive with SLC, you just had to reduce the amount of writing actually taking place -- considerably. DuraWrite does just that, through a combination of compression and deduplication. That means less redundant data gets written, lowering the write amplification ratio to 0.5, SandForce claims. (That was enough to catch IBM's eye.) Other enhancements targeted reliability, power consumption and, obviously, those lightning-quick read and write times.
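To illustrate the principle behind a sub-1.0 write amplification ratio (this is not SandForce's proprietary algorithm, just a sketch of the same idea): compressing redundant host data before it reaches flash means fewer bytes actually get written.

```python
import zlib

# Compress redundant host data before it hits the flash; the workload
# below is deliberately repetitive to show the effect.
host_data = b"log line: status OK\n" * 200   # highly redundant workload

flash_data = zlib.compress(host_data)        # what actually gets written
ratio = len(flash_data) / len(host_data)     # effective write amplification

print(ratio < 0.5)  # -> True for this compressible workload
```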



SandForce has arguably struck a balance between price and performance, one that allows them to serve both consumer and enterprise markets, and it seems to be paying off. Despite making no drives of its own, the company is one of the most recognizable names in SSDs, and was just bought by LSI for $370 million. Right now, SandForce looks to be in the pole position for the solid-state innovation race.





To better understand the place of SSDs in today's storage landscape, it's worth recounting some history. Cast your mind back to a time when computers weighed tons and were delivered by forklifts. One such contraption, the 305 IBM RAMAC, debuted in September 1956. That typically cuddly acronym stood for "Random Access Method of Accounting and Control," and Big Blue's system leased for $3,200 a month. For that you got a console, processing unit, printer, card punch and massive power supply -- all delivered by cargo plane, as long as you had the 30- by 50-foot air-conditioned room needed to house it.Most important to our story, though, RAMAC shipped with the IBM 350 Disk Storage Unit. Back then, drives didn't need fearsome names like VelociRaptor and Scorpio to distinguish themselves; in fact, IBM's was the first hard disk drive, marking a revolutionary moment in computer science.So what was this mechanical marvel? Similar to two contemporary technologies, tape and drum storage, it relied on a moving, magnetically charged medium: 50 aluminum disks, or platters, each 24 inches in diameter. Stacked in a cylinder, they spun at 1,200RPMs while a pair of read heads moved vertically to the right platter, then horizontally to the right track. IBM saw this random access capability as the system's greatest selling point.Here's how it worked: imagine a stack of 50 vinyl records, each separated by a space thin enough for a phonograph needle to pass between them. (If records are as foreign to you as papyrus scrolls, you can substitute CDs.) To hear a particular song, you only need to find the right record and right track; you don't, as with a tape, have to fast forward or rewind through all those unrelated songs. Random access, the ability to begin reading from any point on the medium, dramatically reduces the time it takes to find data; the seek time on the 350 was about 600 milliseconds.The 350 stored about 4.4MB. 
The story goes that it could have held more -- after all, you could always add more platters -- but the marketing department couldn't figure out how to sell any more MBs, thereby beginning the long tradition of "it's all the space you'll ever need!" Even so, it soon came with an optional second drive.

For the next two decades access times and capacity continued to improve, and in 1973 Big Blue introduced a more recognizable precursor to modern hard disk drives (HDDs). The IBM 3348 Data Module was a sealed cartridge containing the platters, spindle and head-arm assembly. The 1970s version of removable storage, it came in 35MB and 70MB versions. The magnetic platter concept pioneered and refined by IBM laid the groundwork for decades of fast, cheap and reliable data storage. Honed and miniaturized, it's the same basic technology found in HDDs around the world today.

IBM continued apace, increasing the size and speed of its drives. In 1980, it introduced the first 1GB model, as big as a refrigerator and weighing about 550 pounds. Oh, and it cost $40,000. (In 1980 dollars: we'll let you do the math.) The company made business machines at business prices, but a sea change was coming.

That same year, Seagate Technology introduced the first 5.25-inch hard drive, the ST-506, pictured above. Founded by former IBMers, including the legendary Al Shugart, who'd helped develop the RAMAC, Seagate targeted the nascent PC market with smaller, cheaper drives. The ST-506 offered 5MB for $1,500; thanks to the endorsement of Big Blue (which had entered the microcomputer market in response to Apple's early success), its interface soon became the de facto standard.

With a growing market for personal computers, innovation flourished. Companies such as Western Digital, Quantum, Maxtor, Conner Peripherals, HP and Compaq all competed to develop the next bigger, faster drive. During the 1980s, capacity increased by as much as 30% each year; in the next decade the number hit 60%.
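Those growth rates compound faster than they might sound. A short Python sketch translating them into doubling times -- the percentages are the ones above; the math is just the standard compound-growth formula:

```python
import math

def doubling_time_years(annual_growth: float) -> float:
    """Years for capacity to double at a given annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

for label, rate in [("1980s (30%/yr)", 0.30), ("1990s (60%/yr)", 0.60)]:
    print(f"{label}: capacity doubles every {doubling_time_years(rate):.1f} years")

# The late-1990s pace -- a doubling every 9 months -- as an annual rate:
rate_1999 = 2 ** (12 / 9) - 1
print(f"1999 pace: ~{rate_1999:.0%} growth per year")  # ~152%
```

In other words, the industry went from doubling capacity roughly every two and a half years to doubling it every nine months in under two decades.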
By 1999, storage capacity was doubling every nine months.

Most of these gains, though, came from refining the underlying technology, not fundamentally altering it. Rodime introduced the first 3.5-inch hard drive in 1983, establishing the new standard form for desktop storage. Since then, manufacturers have sought to squeeze more and more data into that space, or into later 2.5-inch disks. Even smaller sizes -- that 1.8-inch Toshiba to your right, for example -- still rely on spinning platters. IBM's 1-inch Microdrive shrunk the tech even more, and for some time competed with CompactFlash by offering greater capacity.

To continue increasing capacity, manufacturers have to keep shrinking the magnetic grains on those platters. Smaller grains mean more bits per square inch, usually called areal density; upping the areal density means you can store more data in the same physical space, and they're still finding ways to do that. Hitachi's perpendicular recording offered another approach to boosting areal density, one soon taken up by other manufacturers.

Eventually, though, magnetic storage runs into fundamental laws of physics. In this case, those immutable rules are represented by the superparamagnetic effect (SPE). Once we shrink magnetic grains below a certain threshold, they become susceptible to random thermal variations that can flip their direction. What exactly does that mean? Writing to an HDD means changing the magnetization of grains, marking them as ones or zeroes. As long as that magnetization remains ordered, the grains can be read -- the data can be retrieved. But if they start randomly flipping directions, you no longer have ones or zeroes. Coherent, readable information dissolves into a bunch of magnetized grains. Perpendicular recording is one way to stave off the SPE limit, as is heat-assisted magnetic recording used in conjunction with bit patterning.
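You can put a rough number on that threshold. A common rule of thumb holds that a grain stays thermally stable for about a decade only while its anisotropy energy (the material constant Ku times the grain volume) is at least ~60 times the thermal energy kB·T. The Ku value below is an illustrative order-of-magnitude figure for a recording alloy, not any specific product's spec:

```python
# Back-of-the-envelope on the superparamagnetic limit: data survives only
# while Ku * V >> kB * T; the usual ten-year-retention rule of thumb asks
# for a ratio of at least ~60.
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300             # room temperature, K
KU = 2e5            # anisotropy constant, J/m^3 (illustrative figure)
STABILITY = 60      # required Ku*V / (kB*T) ratio

min_volume = STABILITY * K_B * T / KU      # smallest stable grain, m^3
min_side_nm = min_volume ** (1 / 3) * 1e9  # expressed as a cube edge, nm
print(f"smallest thermally stable grain: ~{min_side_nm:.0f} nm across")  # ~11 nm
```

Shrink grains below that scale and bits start flipping on their own -- which is exactly why heat-assisted recording exists: higher-Ku materials shrink the stable grain size, at the cost of needing heat to write at all.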
More exotic solutions are in the pipeline as well, and some manufacturers just keep adding platters.

Given all the recent attention, you might think solid-state disks just suddenly appeared. In fact, like their electromechanical brethren, they date back to the 1950s. Indeed, the ancestors of today's SSDs predate platter-based drives. Magnetic core memory, seen above, is one type of early storage that required no moving parts. It too emerged from IBM labs and often served as main memory in the company's mainframes. But partly due to cost -- it could only be handmade, with workers using microscopes to see the tiny filaments they were threading -- magnetic core memory was largely replaced by drum storage, which, you'll recall, eventually led to HDDs.

Still, solid-state memory had a place in many niche markets, especially where high durability was required. NASA spacecraft relied on it, and in 1978 Texas Memory Systems began selling oil companies a 16KB RAM SSD as part of a seismic data acquisition system. That was also the year that StorageTek introduced the first modern SSD; with a maximum capacity of 90MB, it cost $8,800 per MB. The high price tag made it and similar RAM-based disks appealing to only a select few. Equally important, DRAM's speed came at a cost: it was volatile memory, requiring constant power to retain its data. That worked for high-speed, always-on applications, but not for home users. Today, DRAM still fills its role as main memory, but serves as storage in only a small number of cases.

It took the invention of flash memory to really push SSDs toward the mainstream. Dr. Fujio Masuoka developed it in 1980 while at Toshiba. Much to its later chagrin, Tosh failed to capitalize on his work, leaving it to Intel to commercialize. Chipzilla positioned flash as an option for BIOSes and other firmware, but soon saw another application: removable storage.
Intel's MiniCard joined a proliferation of sizes and formats, including Toshiba's SmartMedia (generically referred to as a solid-state floppy-disk card, and often sold with the adapter seen here), CompactFlash, Secure Digital (and its later variations) and Sony's Memory Stick. All relied on flash.

So how does flash work, and what makes it different from traditional magnetic drives? The short answer is that instead of storing data magnetically, flash uses electrons to indicate ones and zeroes. You might already recognize why this is a plus: no moving parts. That means no noise, no head crashes, and greater energy efficiency, since you don't have to move a mechanical arm. And unlike DRAM, it's non-volatile -- it doesn't need constant power to retain information. These advantages are obvious, but in the early going, when placed next to cheap and capacious hard drives, flash still looked like a niche product, useful mainly in digital cameras and other consumer electronics.

For a more in-depth introduction to the physical properties of flash, consider this 12-minute (!) video from SanDisk's SSD Academy. Don't worry if you need to skip it, though; we'll address the most salient details once we start looking at flash's rise as an alternative to the traditional HDD.

Of course, that could change at any time. The SSD market is still a bit of a Wild West: the technology hasn't been perfected, and we've all heard plenty of horror stories about recalls, faulty firmware and other problems. For some users, though, the reward is worth the risk. It also remains to be seen whether a couple of companies will emerge victorious, as did Seagate and Western Digital with HDDs, or whether SSDs will continue to come from a wide variety of manufacturers. As we all know, today's winners can be tomorrow's losers. The only thing we can say with much certainty is that SSDs show a lot of promise, and we're just beginning to tap it.

[Image credits: Ed Thelen, OEMPCWorld and AnandTech]