The latest generation of SSDs is out in full force. Drives are widely available from a multitude of vendors, firmware issues have for the most part been resolved, and competition has driven prices dangerously close to dollar-per-gigabyte territory. If you haven’t already added a solid-state drive to your system, you should probably be thinking about it—if not salivating at the prospect.

There is certainly no shortage of lust-worthy SSDs among this crop of fresh contenders. The sheer volume of different options can be a little daunting, though. As the number of SSD vendors has grown, so has the size and complexity of their solid-state lineups. Most SSD makers have their fingers in different controller technologies and memory configurations, and don’t forget the optimization-filled custom firmware that’s layered on top. The end result is a landscape dotted with drives drawn from relatively shallow pools of common ingredients put together in slightly different ways.

We tried to make sense of this new wave of SSD releases as it washed up on shore earlier this year. The thing is, our initial testing was done with firmware that’s now out of date and with 240-256GB drives that live outside of the average enthusiast’s budget. The characteristics of those capacious drives don’t always reflect the performance of the lower-capacity variants that have become affordable indulgences for enthusiasts.

Today, 120-128GB SSDs can be had for around $200, putting them right in the sweet spot typically occupied by our favorite CPUs and graphics cards. Since we don’t yet have a real favorite among the current class of solid-state drives, we’ve spent weeks running nine SSDs through an expanded storage test suite on new Sandy Bridge hardware. Over the following pages, we’ll take a closer look at Corsair’s Performance 3, Force 3, and Force GT; Crucial’s m4; Intel’s 320 and 510 Series; Kingston’s HyperX; and OCZ’s Agility 3 and Vertex 3 SSDs to see which ones deserve a spot in your notebook or desktop PC.

Step up to the SSD silicon buffet

The collection of drives we’ve assembled for testing all hit the market this year, but the underlying controllers that act as middlemen between the NAND flash chips and Serial ATA interfaces actually span multiple years and generations. These controllers play perhaps the biggest role in dictating drive performance, so they’re a good starting point.

In fact, we might as well start at the beginning, which for desktop SSDs is really Intel’s original X25-M. The chip giant’s first stab at a consumer-grade SSD was a remarkably consistent performer at a time when solid-state drives slowed substantially over time and were still prone to crippling bouts of stuttering. Intel designed its own 10-channel controller, and the chip remains remarkably relevant to this discussion because a very similar version of it underpins the 320 Series SSD. Yep, a controller architecture more than three years old still lives on inside Intel’s newest mainstream solid-state drive.

The most obvious hint at the vintage of the 320 Series’ controller silicon is its 3Gbps Serial ATA interface. That tells you a little something about this drive’s chances versus competition equipped exclusively with 6Gbps SATA pipes. The 320 Series is further limited by the controller’s old-school NAND interface, which tops out at 50MB/s per memory chip, and by its overly complicated name, the PC29AS21BA0.

Unfortunately, the controller’s code-name, Postville Refresh, is equally uninspired. Despite retaining the original’s 10 memory channels, this X25-M refresh does add a few new perks, including 128-bit AES encryption and XOR, a NAND-level redundancy scheme that works a little like a RAID 4 array. XOR guards against data loss due to irrecoverable flash failures, and it’s capable of withstanding the death of an entire NAND die while keeping your Windows 7 install and carefully managed Steam folder intact.

Of the nine drives we’ll be looking at today, only the 320 Series uses Intel’s SSD controller. Three of the others, including Intel’s own 510 Series, make use of a Marvell 88SS9174 controller whose roots can be traced back to last year’s Crucial RealSSD C300. This second-generation Marvell design was the first controller to support the 6Gbps Serial ATA standard.

To keep its faster SATA interface well supplied, the 9174 supports the second-gen Open NAND Flash Interface (ONFI) specification. ONFI 2.0 allows flash chips to pass data at speeds in excess of 133MB/s, which is quite an upgrade over the gen-one spec’s 50MB/s ceiling. The 9174 interfaces with those faster flash chips over eight memory channels, providing plenty of performance potential. You’ll have to look elsewhere for extra features like full-disk encryption and RAID-like redundancy schemes, though.

SandForce remains the freshest face in the world of SSD controllers, and it just so happens to have the newest silicon. After storming onto the market last year with the SF-1000 series, SandForce is back with the SF-2000 family, which adds 6Gbps Serial ATA connectivity to an intriguing mix of other features. Chief among those is DuraClass, a black box of technologies that mixes compression and encryption to reduce the NAND footprint of incoming writes. Doing so not only accelerates performance, but also improves endurance by consuming fewer NAND write/erase cycles.

Although SandForce remains cagey about exactly how DuraClass’ DuraWrite compression component works, we do know that it’s tied to the controller’s 256-bit AES encryption engine. RAISE, a NAND-level redundancy scheme similar to Intel’s XOR, is also a part of the DuraClass puzzle. SandForce says RAISE functions much like a RAID 5 array, dedicating the capacity of one flash die to storing pseudo-parity data. Like XOR, RAISE is capable of surviving the failure of an entire flash die without data loss.
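SandForce hasn't published the details of how RAISE lays out its parity data, but the general idea behind these RAID-like redundancy schemes is plain XOR parity: XOR all of the data dies together, store the result on a spare die, and any single lost die can be rebuilt from the survivors. Here's a minimal sketch in Python (the `parity` and `reconstruct` helpers are illustrative, not anything from SandForce or Intel):

```python
from functools import reduce

def parity(chunks):
    """XOR corresponding bytes of every chunk to produce a parity chunk."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

def reconstruct(survivors, parity_chunk):
    """Rebuild a single lost chunk: XORing the survivors with the parity
    chunk cancels out everything except the missing data."""
    return parity(survivors + [parity_chunk])

# Pretend each NAND die holds one 4-byte chunk of user data.
dies = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
p = parity(dies)

# Simulate the death of die 1 and recover its contents from the rest.
recovered = reconstruct([dies[0], dies[2]], p)
assert recovered == dies[1]
```

The same property is why both XOR and RAISE can only tolerate the loss of one die: with two dies missing, the XOR sum no longer pins down either one.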

The SF-2281’s eight memory channels are compatible with a range of flash configurations. Thus far, SandForce-based drives are largely divided into two categories: those equipped with asynchronous NAND, and those with the synchronous stuff. Synchronous NAND is the DDR of the flash world, capable of transferring data on both the rising and falling edges of a shared clock signal. Asynchronous chips aren’t tied to a clock at all, and they’re slower as a result.

Synchronous NAND can also be found in Crucial’s m4. Intel declined to reveal the NAND interface for its SSDs, but I suspect the 510 Series uses synchronous chips. The Intel 510 drive is faster than the RealSSD C300, which pairs asynchronous NAND with an identical Marvell controller. The Intel 320 Series is likely saddled with asynchronous flash, a side effect of its older controller tech.

Although most of the SSDs on the market draw their flash memory from asynchronous and synchronous chips based on the ONFI specification backed by Intel and Micron, a competing synchronous technology known as Toggle DDR NAND is endorsed by Toshiba and Samsung. Toggle DDR is supported by both the Marvell and SandForce controllers, but only one drive in our stack takes advantage. Corsair’s Performance Series 3 has Toshiba NAND chips based on the first-generation Toggle standard, which enables per-chip transfer rates of 133MB/s, a speed that conveniently matches the starting point for the second-gen ONFI spec.

Those Toggle DDR chips are fabbed on a 34-nm process, as is the ONFI NAND lurking inside Intel’s 510 Series. Otherwise, drive makers have largely made the transition to 25-nm NAND. The finer fabrication process packs more gigabytes into each silicon wafer, which is one reason why members of the latest generation of SSDs are cheaper than their forebears.

Nine recipes for solid-state bliss

It’s easy to be overwhelmed by the sheer number of drives we’ve assembled for testing today—there are nine of ’em, after all. To help you get a sense of the pack, here’s a handy chart that lines up the key characteristics of each drive for easy comparison.

                              Size    Controller          NAND                      Cache   Warranty  Price
Corsair Force Series 3        120GB   SandForce SF-2281   25-nm Micron async ONFI   NA      3 years   $166
Corsair Force Series GT       120GB   SandForce SF-2281   25-nm Intel sync ONFI     NA      3 years   $210
Corsair Performance 3 Series  128GB   Marvell 88SS9174    34-nm Toshiba Toggle      128MB   3 years   $205
Crucial m4                    128GB   Marvell 88SS9174    25-nm Micron sync ONFI    128MB   3 years   $197
Intel 320 Series              120GB   Intel PC29AS21BA0   25-nm Intel ONFI          64MB    5 years   $215
Intel 510 Series              120GB   Marvell 88SS9174    34-nm Intel ONFI          128MB   3 years   $279
Kingston HyperX               120GB   SandForce SF-2281   25-nm Intel sync ONFI     NA      3 years   $245
OCZ Agility 3                 120GB   SandForce SF-2281   25-nm Micron async ONFI   NA      3 years   $179
OCZ Vertex 3                  120GB   SandForce SF-2281   25-nm Intel sync ONFI     NA      3 years   $210

The first thing you might notice is the fact that we have SSDs at two capacity points: 120 and 128GB. The 120GB drives offer 112GB of formatted capacity in Windows, while the 128GB models report 119GB.

Most SSDs reserve a percentage of their total flash capacity as overprovisioned “spare area” dedicated to the controller. This segment of flash isn’t accessible to the operating system, leading to a lower usable capacity than the sum of the NAND chips on a given drive. Overprovisioning isn’t the only element that demands a slice of SSD capacity, either. The hardware-level XOR and RAISE redundancy schemes built into 320 Series and SandForce-powered SSDs require additional flash capacity to store parity data, which is why those drives have slightly lower capacities.
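Much of the gap between marketed and formatted capacity is also a simple units mismatch: drive makers count decimal gigabytes (10^9 bytes), while Windows reports binary gigabytes (2^30 bytes). A quick back-of-the-envelope conversion (the `windows_capacity_gb` helper is just for illustration) shows where the 112GB and 119GB figures come from:

```python
def windows_capacity_gb(marketed_gb):
    """Convert a decimal-gigabyte marketing capacity to the binary-gigabyte
    figure Windows reports for the same number of bytes."""
    return marketed_gb * 1000**3 / 1024**3

# A "120GB" drive and a "128GB" drive, as Windows sees them:
print(round(windows_capacity_gb(120)))  # 112
print(round(windows_capacity_gb(128)))  # 119
```

The overprovisioned spare area and any parity data come out of the raw NAND before that conversion, which is why a SandForce drive built from 128GB of flash chips is sold as a 120GB model in the first place.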

We’ll get into cost-per-gigabyte calculations a little later, but it’s worth keeping in mind that the m4 and Performance 3 offer 7GB more than their counterparts. That’s a substantial chunk of storage when you’re trying to squeeze as many games as possible onto an OS and applications drive.

Take note of the price column over on the right, too. The cost of each drive has been factored into our value graphs, and there’s quite a spread between the $166 Force 3 and the $279 510 Series. The Intel drive is by far the most expensive of the bunch, and the 320 Series is surprisingly pricey given its sluggish 3Gbps interface.

With two more years of warranty coverage than the competition, the 320 Series has at least one unique attribute to help justify its price tag. There’s also an array of capacitors on the circuit board that should keep the drive alive long enough to complete outstanding write operations if the power cuts out. The 510 Series doesn’t enjoy either of those perks, but it does have the unique distinction of being Intel’s first shot at building an SSD with someone else’s controller technology. Intel cooked up its own firmware optimizations for the 510 Series, and they’ll have to be quite effective to account for the premium the drive commands over Crucial and Corsair SSDs based on the same Marvell controller.

Corsair’s Performance 3 Series is the elder of the other two Marvell models. The drive was introduced at CES earlier this year, but we heard only last week that it may not be long for this world. The SSD market moves quickly, and as we move through our performance results, it will become clear why we’re not sad to see the Performance 3 go.

If you’re interested in the Marvell controller, Crucial’s newer m4 SSD is definitely more appealing. The m4 is cheaper, for one, and it’s decked out with double the number of NAND chips (16 in total), all of which are fabbed on a 25-nm process. Crucial is the consumer brand of parent company Micron, so you can probably guess who supplies the flash and DRAM cache chips for the m4. Crucial is one of only a handful of SSD makers with NAND production capacity of its very own.

Over half of our group of nine SSDs is based on one controller: SandForce’s SF-2281. Corsair and OCZ offer Force 3 and Agility 3 drives that match the controller with asynchronous memory, and both implementations use 16 of the same Micron NAND chips. None of the SandForce-based drives features DRAM cache memory, which isn’t required by the controller.

The Force Series GT, HyperX, and Vertex 3 all use the same combination of the SandForce controller and 16 synchronous NAND chips, making them brothers from different mothers. The synchronous NAND comes from Intel rather than Micron. (Incidentally, the two companies do have a joint flash venture dubbed IM Flash Technologies. It doesn’t supply flash memory for any of the SSDs in this round-up, though.)

While the Vertex 3 wraps everything up in a nondescript black case, the GT is decked out in an ultra-bright shade of red that wouldn’t look out of place on a Ferrari. The paint job won’t make the drive any faster, but it’s nice to see a little bit of effort being put into making SSDs look and feel as expensive as they are.

If you think the GT looks good, then get a load of Kingston’s HyperX. The drive looks nothing like the firm’s previous SSD efforts, which were comparatively sedate. This is Kingston’s first foray into SandForce territory, and at least as far as aesthetics are concerned, it’s the most attractive SSD on the market. Unfortunately, the case is secured with a set of trick Allen bolts, preventing us from popping it open to peek at the circuit board. Kingston assures me that it hasn’t hidden a JMicron controller under the hood.

If that’s not enough SandForce variety for you, we should note that OCZ has two more SSD models based on the SF-2281. A cheaper Solid 3 drive is positioned below the Agility 3, while a pricier Max IOps variant of the Vertex 3 fills out the other end of the spectrum. OCZ’s middle children seem to be the most popular members of the family, which is why they’re the focus for today.

The inconvenient truth

As attractive as solid-state drives have become in recent years, there remains a rather inconvenient truth: SSDs still suffer from serious issues. Either due to their own actions or by virtue of being associated with another company’s hardware, each and every one of the SSD makers represented in this round-up is dirty. Most of the problems have been firmware-related, and not even Intel, widely viewed as a bastion of reliability, is immune.

Indeed, Intel has been party to two rather embarrassing firmware episodes in recent years. A firmware update intended to enable TRIM on second-generation X25-M drives ended up bricking some of them. More recently, a nasty firmware bug was discovered in the 320 Series that reduced the total capacity to just 8MB, taking all of the drive’s data with it. Those issues were both resolved, but they hardly inspire confidence.

The fact that other SSD makers have had similar issues doesn’t help. Crucial’s RealSSD C300 was prone to slipping into a particularly low-performance state, and the initial firmware update to address the problem ended up killing a number of drives. An updated fix followed, of course, but that didn’t stop me from holding my breath when I applied the latest 0009 firmware update to the Crucial m4.

There’s trouble in SandForce territory, too. Although I haven’t experienced it myself, the widely reported “BSOD bug” is very real—if rare and difficult to reproduce. SandForce’s official statement on the subject shifts a lot of the blame to host drivers and “isolated” hardware configurations, so the root cause is unclear. However, the company has confirmed that it has been testing new firmware that tweaks how drives handle different power states, background operations, and errors. That firmware only “appears to be yielding positive results,” though, so it may not be a silver bullet. We’ll have to revisit the SandForce drives when this new firmware trickles down to end users, which should be in a matter of weeks.

An occasional BSOD seems less severe than losing data during a firmware update or due to spontaneous solid-state suicide, but it’s still unsettling. Drives from all of SandForce’s partners appear to be affected, which casts a cloud over Corsair, Kingston, and OCZ. In fact, Corsair went so far as to recall an early batch of Force 3 SSDs due to not only firmware issues, but also problems with the SSD hardware.

With the exception of Intel, drive makers have been largely loath to publish reliability statistics about their drives. The stack of SSDs I have sitting in the Benchmarking Sweatshop isn’t nearly tall enough to make up a reasonable sample size, leaving us to seek out other sources for anecdotal reliability data. Newegg’s user reviews may be helpful in this context, although they certainly should be taken with a grain of salt. In the chart below, I’ve compiled a collection of Newegg user scores for each SSD, including the total number of reviews, the average score, and the percentage of one-star reviews.

                              Reviews  Average  1-star
Corsair Force Series 3        9        4        22%
Corsair Force Series GT       47       4        9%
Corsair Performance 3 Series  39       4        15%
Crucial m4                    98       4        11%
Intel 320 Series              75       5        3%
Intel 510 Series              134      4        7%
Kingston HyperX               12       4        0%
OCZ Agility 3                 93       3        28%
OCZ Vertex 3                  244      4        26%
WD Caviar Black 1TB           1371     4        14%

The Agility 3 and Vertex 3 have among the highest percentages of one-star reviews, with the Agility 3 averaging only three stars. That makes OCZ look particularly questionable, but keep in mind that it had exclusive early access to SandForce’s new controller. The Agility 3 and Vertex 3 were out long before other SandForce partners got in on the action—with newer firmware revisions. SSDs haven’t been particularly kind to early adopters.

The SandForce-based drives from Corsair and Kingston have far fewer reviews, and those reviews seem to be more favorable overall. Kingston’s HyperX is the most recent addition to the SandForce fold, and with only a dozen user reviews, I wouldn’t assume that its lack of one-star ratings makes the drive any less prone to being hit by the BSOD bug.

The 320 Series nicely illustrates why these user reviews aren’t necessarily a good indicator of reliability. Despite a firmware bug capable of compromising user data, the 320 Series has amassed the highest average score and the lowest percentage of one-star ratings of any drive with a decent number of total reviews. With that in mind, it’s hard to know what to make of the largely positive responses to some of the other drives.

To put things into perspective, I’ve also included user review scores for Western Digital’s Caviar Black hard drive. Think mechanical hard drives are free of user complaints? Think again.

I don’t want to draw any conclusions based on bugs that have been fixed, issues I haven’t experienced, and user reviews that are too easily tainted by self-selection, corporate astroturfing, and vocal early adopters. Still, it’s clear to me that, although solid-state drives are undoubtedly on the cutting edge of storage technology, they’re definitely still capable of drawing blood. Anyone considering an SSD upgrade should have a solid backup solution in mind, and I’d recommend frequently imaging your OS and applications drive, just in case.

The twins: Reloaded

We’ve concocted an expanded test suite for this mid-range SSD round-up, and The Twins, our matched pair of test systems, have been upgraded to celebrate the occasion. For our purposes, the most important ingredient in this new configuration is Asus’ P8P67 Deluxe motherboard, whose P67 Express chipset provides built-in 6Gbps Serial ATA connectivity. The P67 may only have two 6Gbps SATA ports, but they’re faster than any other implementation we’ve tested.

The P8P67 Deluxe also has a UEFI firmware interface—by far the best one around—endowing The Twins with native support for hard drives larger than 2.19TB. SSDs may be our focus today, but these systems will spin their share of mechanical platters. In fact, a Western Digital Caviar Black 1TB serves as the system drive for our tests, since most of them can only probe drives connected as secondary storage. We’ve also thrown a Caviar Black into our performance testing to provide some context for how the SSDs compare to a 7,200-RPM desktop drive.

A Core i5-2500K sits inside each of our two Deluxe motherboards, and we had intended for the upgrades to end there. However, we encountered some stability issues when burning in the new config, so we set about replacing our older components with fresh hardware.

On the memory front, we’re using Corsair Vengeance kits made up of two 4GB DIMMs. The modules are rated to handle speeds up to 1600MHz, but we’ll only be running them at 1333MHz. All of the test systems in the Benchmarking Sweatshop have used the same Vengeance modules since January, and we haven’t had any problems with them.

The Twins originally used passively cooled graphics cards to keep noise levels to a minimum, but we worried that might not be the most stable long-term config on an open test bench with little ambient airflow. These systems don’t need much in the way of graphics horsepower, so we’ve opted for a pair of Asus EAH6670/DIS/1GD5 graphics cards based on AMD’s Radeon HD 6670 GPU. The cards come with 1GB of RAM and a trio of digital outputs that includes DisplayPort. Their coolers are pretty quiet, too, although the plastic shroud on one of ’em has a tendency to vibrate a little, emitting a low hum that’s audible over the fan noise.

Speaking of noise, I should address CPU cooling, which is currently being handled by a pair of Thermaltake SpinQs from our original storage test rigs. A couple of new coolers are on their way to replace the SpinQs, but they haven’t arrived yet.

A bad PSU was responsible for at least some of the problems we had with our original configuration, so we went all out with an upgrade. Corsair’s new Professional Series PSUs boast 80 Plus Gold certification and modular cables, making them perfect companions for our new test systems. The 650W units we’re using have more than enough output capacity and have thus far been very quiet.

Our testing methods

I’m just going to come right out and say it: SSD testing is hard. In the mechanical era, storage was nice and predictable. Today’s solid-state drives are rather more complex, especially since their performance depends not just on the test you’re running, but also on the test you ran before that. And don’t forget about the performance implications of the block-rewrite penalty inherent to flash memory—or the TRIM and garbage-collection routines designed to combat it.

To ensure consistent and repeatable results, the SSDs were secure-erased between almost every component of our test suite. This returns the drives to their factory fresh state, erasing any remnants of previous workloads.

For some benchmarks, we’ve deliberately tested drives in a used state to illustrate their long-term performance potential. Other tests create their own used states, usually by writing across the full extent of the drive before launching into a workload. In all cases, the SSDs were tested in the same states as their peers, ensuring an even playing field for all. I’ve even gone so far as to avoid running certain benchmarks overnight to ensure that some SSDs don’t spend more time than others idling between tests.

All of the SSDs are equipped with their latest firmware, and we’re using fresh Rapid Storage Technology drivers from Intel. We’ve also taken steps to ensure that Sandy Bridge’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the 2500K at 3.3GHz. Transitioning in and out of different power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

We run all our tests at least three times and report the median of the results. We’ve found IOMeter performance can fall off after the first couple of runs, so we use five runs in total and throw out the first two. We used the following system configuration for testing:
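That policy is easy to express in a few lines. This sketch (the `score` helper and the numbers are hypothetical, just to show the shape of the calculation) drops a configurable number of warm-up runs and reports the median of whatever remains:

```python
from statistics import median

def score(runs, warmup=0):
    """Discard the first `warmup` runs, then report the median of the rest."""
    return median(runs[warmup:])

# A typical benchmark: three runs, median of all three.
print(score([412.0, 405.5, 408.1]))

# An IOMeter-style result: five runs, first two thrown out because
# performance can fall off after the early runs.
print(score([520.0, 498.0, 461.2, 459.8, 460.5], warmup=2))
```

The median is a deliberate choice over the mean here: a single outlier run skews an average but leaves the median untouched.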

Processor         Intel Core i5-2500K 3.3GHz
Motherboard       Asus P8P67 Deluxe
BIOS revision     1850
Platform hub      Intel P67 Express
Platform drivers  INF update 9.2.0.1030, RST 10.6.0.1022
Memory size       8GB (2 DIMMs)
Memory type       Corsair Vengeance DDR3 SDRAM at 1333MHz
Memory timings    9-9-9-24-1T
Audio             Realtek ALC892 with 2.62 drivers
Graphics          Asus EAH6670/DIS/1GD5 1GB with Catalyst 11.7 drivers
Hard drives       Corsair Force Series 3 120GB with 1.3 firmware
                  Corsair Force Series GT 120GB with 1.3 firmware
                  Corsair Performance 3 Series 128GB with 1.1 firmware
                  Crucial m4 128GB with 0009 firmware
                  Intel 320 Series 120GB with 4PC10362 firmware
                  Intel 510 Series 120GB with PPG4 firmware
                  Kingston HyperX 120GB with 320ABBF0 firmware
                  OCZ Agility 3 120GB with 2.11 firmware
                  OCZ Vertex 3 120GB with 2.11 firmware
                  WD Caviar Black 1TB with 05.01D05 firmware
Power supply      Corsair Professional Series Gold AX650W
OS                Windows 7 Ultimate x64

Thanks to Asus for providing the systems’ motherboards and graphics cards, Intel for the CPUs, Corsair for the memory and PSUs, Thermaltake for the CPU coolers, and Western Digital for the Caviar Black 1TB system drives.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune — Transfer rates

HD Tune lets us look at transfer rates in a couple of different ways. We use the benchmark’s “full test” setting, which tracks performance across the entire drive and lets us create the fancy line graphs you see below. This test was run with its default 64KB block size.

As the rainbow above clearly illustrates, we’ve done some color coding to make the graphs a little easier to read. In the bar graphs, the SSDs are colored by drive maker. We’ve expanded that selection of colors to cover individual models in the line graphs. Our lone mechanical drive, the Caviar Black, is set apart in grey throughout.

HD Tune’s read speed test splits the contenders into multiple tiers. It’s crowded at the top, where the Crucial m4 offers a slightly higher average speed than the Kingston HyperX and all the other SandForce drives. The Intel 510 Series’ average read rate is more than 45MB/s off the pace, and Corsair’s spin on the same Marvell controller is another 5MB/s behind.

Of course, those 6Gbps drives are still much faster than the 320 Series, which plods along without threatening to saturate its 3Gbps interface. Even though it more than doubles the Caviar’s average read speed, the 320 Series is still left in the dust by its 6Gbps competition.

Although the SSDs exhibit largely consistent read speeds across the extent of their capacity, the same can’t be said for writes on the SandForce drives. As the graph clearly shows, the SandForce controller bounces between high and low extremes more than 200MB/s apart. The 510 Series and Performance 3 also exhibit dips in performance, but they’re not nearly as severe or as frequent.

Despite their seemingly erratic behavior, the SandForce drives do offer higher average write speeds than the rest of the field. The fastest among them, the Vertex 3 and HyperX, have higher minimum speeds than the 510 Series’ average. And the 510 Series has a higher average write rate than everything else outside the SandForce camp.

The 320 Series might look a little more competitive versus the Performance 3, but don’t be fooled. The Intel drive writes only 30MB/s faster than the Caviar Black, which is hardly quick.

HD Tune’s burst speed tests are meant to isolate a drive’s cache memory.

The SandForce drives don’t have any external cache chips, but that doesn’t hold back the synchronous configs, which sit comfortably at the front of the pack with both reads and writes. Interestingly, the asynchronous Agility 3 and Force 3 configs keep pace with reads but are slower with writes.

Most of the SSDs are quicker with burst reads than they are with writes, but none quite so dramatically as the Crucial m4 and Intel 320 Series. Both SSDs have slower burst write speeds than our mechanical hard drive. Interestingly, only the 510 Series and Performance 3 have the same burst speed for both reads and writes.

HD Tune — Random access times

In addition to letting us test transfer rates, HD Tune can measure random access times. We’ve tested with four transfer sizes and presented all the results in a couple of line graphs. We’ve also busted out the 4KB and 1MB transfer sizes into bar graphs that should be easier to read.

The line graph nicely explains why solid-state drives are so attractive versus their mechanical counterparts: comparatively instantaneous access times. The Caviar Black’s access times are measured in two-digit milliseconds, but at least through 64KB transfers, SSD access times are measured in fractions of a millisecond—tiny fractions, in fact.

Including the Caviar Black in the bar charts muddies the waters a little, but look at the access times for the 4KB transfer size. Seven of the nine SSDs are within 0.02 milliseconds of each other. The Performance 3 and 510 Series lag behind a little, but the Crucial m4, which uses the same controller chip, does not.

Random reads slow down quite a bit with the larger 1MB transfer size, but there’s little change in the standings. The Performance 3 and 510 Series still trail the leaders, this time joined by the 320 Series.

Switching to random writes changes the picture a little—well, except for the mechanical drive, which remains entirely uncompetitive. At the 4KB transfer size, all of the SSDs are on pretty even footing. Only 0.03 milliseconds separate the pack, with the SandForce drives out in the lead.

The Agility, Vertex, Force, and HyperX drives maintain that lead at the 1MB transfer size, where the rest of the pack starts to drop off. The Intel 510 Series and Crucial m4 are evenly matched with 1MB writes, and they’re both quicker than the Performance 3 and 320 Series.

TR FileBench — Real-world copy speeds

Our resident developer, Bruno “morphine” Ferreira, has been hard at work on a new file copy benchmark for our storage reviews. FileBench is the result of his efforts. This shining example of scripting awesomeness runs through a series of file copy operations using Windows 7’s xcopy command. Using xcopy produces nearly identical copy speeds to dragging and dropping files using the Windows GUI, so our results should be representative of typical real-world performance.

To reduce the number of external variables, FileBench runs entirely on the drive that’s being tested. Files are copied from source folders to temporary targets that aren’t deleted until all testing is complete. Copy speeds were tested first with the SSDs fresh from a secure erase and a second time in a “tortured” used state after 30 minutes of IOMeter thrashing through a workstation access pattern loaded with 32 concurrent I/O requests.

To gauge performance with different kinds of files, we tested with five sets. The movie set includes six video files of the sort one might download off BitTorrent. Total payload: 4.1GB. 101 uncompressed images from my Canon Rebel T2i make up the RAW file set, totaling 2.32GB. Our MP3 file set uses a chunk of my music archive, which is made up of high-bitrate MP3s and associated album art. This one has 549 files that add up to 3.47GB. The Mozilla file set includes the huge selection of files necessary to compile Firefox. All told, there are 22,696 files spread across only 923MB. Finally, we have the TR file set, which contains several years’ worth of the images, HTML files, and spreadsheets behind my reviews. This set has the largest number of files at 26,767, but it’s heftier than the Mozilla set with 1.7GB worth of data.

Oh, what a difference file size makes. The Crucial m4 has the highest copy speeds with the movie, MP3, and RAW file sets, but it stumbles way down in the standings with the remaining two, which are made up of much higher numbers of substantially smaller files. Corsair’s Performance 3 and Intel’s 510 Series aren’t far off the pace with the first three file sets, and they don’t suffer as much with the TR and Mozilla files.

All of the SandForce drives clump together at the head of the class with the Mozilla file set, and the asynchronous configs lag behind the synchronous ones by just a few MB/s. That gap grows as file sizes increase, though. The asynchronous Agility 3 and Force 3 are relegated to 320 Series territory with the MP3, RAW, and movie sets.

The SandForce drives exhibit slower copy speeds in our used state than they do when fresh from a secure erase, likely due to a less aggressive approach to reclaiming used flash pages marked as available by TRIM. Surprisingly, the Marvell-based drives are faster in a used state than after having all their flash pages cleared. This is true, to varying degrees, for the m4, Performance 3, and 510 Series with virtually every file set.

TR DriveBench 1.0 — Disk-intensive multitasking

TR DriveBench allows us to record the individual I/O requests associated with a Windows session and then play them back on different drives. We’ve used this app to create a set of multitasking workloads that combine common desktop tasks with disk-intensive background operations like compiling code, copying files, downloading via BitTorrent, transcoding video, and scanning for viruses. You can read more about these workloads and desktop tasks on this page of our SSD value round-up.

The traces that make up this first batch of DriveBench workloads are an imperfect measure of real-world performance because they only account for small snippets of disk activity. A much larger trace on the following page addresses that deficiency, but we’re going to keep around the old ones to give you a comparative point of reference to our reviews of older drives.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance scores from each of the multitasking workloads.

DriveBench 1.0 is run right after five rounds of our usual IOMeter access patterns, so all the drives start in a thoroughly used state. This test will only run on an unpartitioned drive, so we delete the IOMeter test file (which spans the entire capacity of each drive) and the accompanying partition before launching DriveBench.

The Crucial m4 upsets an all-SandForce podium by sneaking between the Force GT and Vertex 3. The HyperX isn’t far behind, while the asynchronous Agility 3 and Force 3 are notably slower than their synchronous kin. Surprisingly, the 320 Series outpoints the 510 Series, at least overall.

The two Intel drives trade blows back and forth through our five access patterns, but the 320 Series takes three out of the five. We saw a 250GB version of the 510 Series easily outpace a 320 Series with 300GB under its belt, suggesting Intel’s Marvell-based solution doesn’t translate as gracefully to lower capacity points. That said, the 320 Series has benefited from a post-launch firmware update, while the 510 Series has not.

Crucial’s latest firmware has definitely helped the m4’s performance, allowing the drive to top its SandForce competition in three of five workloads. The split between the synchronous and asynchronous SandForce configs remains, but we don’t see much of a gap between products using the same memory chips.

TR DriveBench 2.0 — More disk-intensive multitasking

As much as we like DriveBench 1.0’s individual workloads, the traces cover only slices of disk activity. Because we fire the recorded I/Os at the disks as fast as possible, the drives also have no downtime during which to engage background garbage collection or other optimization algorithms. DriveBench 2.0 addresses both of those issues with a much larger trace and a slightly different testing methodology.

For our new trace, I recorded about two weeks of disk activity on a test system pressed into service as my primary desktop. The system was left on at all times, and it was used mostly for web surfing, email, photo editing, gaming, and working with the HTML, Excel, and image files that become TR content. To make things more interesting, I fired up disk-intensive multitasking workloads alongside those more mundane desktop tasks. The multitasking workloads were similar to what’s included in DriveBench 1.0: compiling code, copying files, downloading torrents, transcoding video, and scanning for viruses.

Although my bursts of disk-intensive multitasking were contrived in nature, the goal was to come up with a demanding test that would probe drives for weakness in a sea of everyday I/O. The system was limited to a single partition, which housed not only the OS and applications but also all of the associated data and downloaded, ahem, Linux ISOs. We plan to add at least one more DriveBench workload that more strictly models the life of an OS and applications drive, but that’ll have to wait for a future article.

As it stands, our multitasking-infused trace is loaded with more than 25 million read operations totaling over 1.1TB of data. The workload has plenty of writes, too: 14 million that add up to nearly 525GB. That’s a busy couple of weeks.

DriveBench 1.0 all but eliminates disk idling, but this second revision gives the SSDs plenty of idle time for background processing. The test begins with drives fresh from a secure erase, but it writes across their full capacity to ensure that all flash pages are filled before performance is measured. Things are a little different on that front, too. Instead of looking at a raw IOps rate, we’re going to explore service times—the amount of time that it takes drives to complete an I/O request. We’ll start with an overall mean service time before slicing and dicing the results.

The SandForce drives reign supreme, with the synchronous Force GT, Vertex 3, and HyperX configs topping their asynchronous counterparts. Intel lurks just shy of that second pack of SandForce offerings, while the Crucial m4 trails even the 320 Series. Behind it, the Performance 3 shows serious weakness and is nearly as slow as our mechanical hard drive.

Let’s try to make some sense of these numbers with a breakdown of reads and writes.

The standings shift slightly from the overall results when we only consider reads. While the synchronous SandForce drives remain in the lead, they’re now followed by the 510 Series and the m4. Corsair’s Performance 3 Series fares much better here, trailing the 320 Series by a relatively small margin.

Switch to writes, however, and the middle of the pack shuffles completely. The asynchronous SandForce configs tuck in behind their synchronous relatives, while the 510 Series slips behind not only the 320, but also the Caviar Black. Write service times are even slower on the Crucial m4, and the Performance 3 is a complete disaster.

There are millions of I/O requests in this trace, so we can’t easily graph service times to look at the variance. However, our analysis tools do report the standard deviation, which can give us a sense of how much service times vary from the mean.
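As a rough illustration of how we boil down the trace, here's a sketch of the per-operation statistics, assuming a simple list of (operation, service time) records, which is our simplification rather than the actual DriveBench trace format:

```python
import statistics
from collections import defaultdict

def service_time_stats(trace):
    """trace: iterable of (op, service_time_ms) pairs, op in {"read", "write"}.
    Returns each op's (mean, standard deviation) of service times."""
    by_op = defaultdict(list)
    for op, time_ms in trace:
        by_op[op].append(time_ms)
    return {
        op: (statistics.mean(times), statistics.pstdev(times))
        for op, times in by_op.items()
    }
```

A low standard deviation relative to the mean indicates service times tightly clustered around that mean, which is what the graphs below are probing.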

With reads, the standard deviation results stack up much like the mean service times. One notable exception is the Crucial m4, which falls three places.

The SandForce configs have a low standard deviation with both reads and writes, indicating more consistent performance than the competition. Coupled with lower mean service times, that’s a pretty appealing combo. Once again, the Marvell-based SSDs falter with writes and fall behind our lone mechanical hard drive.

If I haven’t already scared you off with too many graphs and statistics, this next pair will do it. We’re going to close out our DriveBench analysis with a look at the distribution of service times. I’ve split the tally between I/O requests that complete in 0-1 milliseconds, 1-100 ms, and those that take longer than 100 ms to complete.
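The bucketing itself is simple enough to spell out in a short sketch. The bucket edges match the ones described above; how boundary values are assigned is our arbitrary choice, and the function name is ours.

```python
def service_time_distribution(service_times_ms):
    """Tally requests into 0-1 ms, 1-100 ms, and 100+ ms buckets,
    returned as percentages of all requests in the trace."""
    buckets = [0, 0, 0]
    for time_ms in service_times_ms:
        if time_ms <= 1.0:        # boundary values go to the lower bucket
            buckets[0] += 1
        elif time_ms <= 100.0:
            buckets[1] += 1
        else:
            buckets[2] += 1
    total = len(service_times_ms)
    return [100.0 * count / total for count in buckets]
```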

The top contenders are pretty closely matched with reads. Only the Agility 3, Force 3, and 320 Series drop off the lead group. Our write results are more interesting, and they hint at why the Performance 3 fares so poorly overall: 3.6% of all write requests take over 100 milliseconds to complete. That’s a long time within the context of a modern PC. We’re currently looking at other ways to crunch these data to determine whether those slower service times occur in bunches that might cause perceptible stuttering or lag.

Even if they don’t, the percentage of 100+ ms service times on the Performance 3 is several orders of magnitude higher than on most of the other drives—including the Caviar Black. The m4 and 510 Series also have a higher number of 100+ ms requests than the mechanical drive, highlighting a potential weakness of the Marvell controller.

IOMeter

Our IOMeter workloads are made up of randomized access patterns, making them perfect candidates to exploit the wicked-fast access times of solid-state storage. This app bombards drives with an escalating number of concurrent I/O requests and should do a good job of simulating the demanding environments common in enterprise applications. We’ve previously tested with the “pseudo random” access pattern, but that re-uses a buffer that is only filled with random data once, which doesn’t strike us as very random at all. For this round of testing, we’ve cut out the recycling with IOMeter’s fully random setting.

This decision mostly impacts the write-compression technology inside SandForce controllers, which will struggle to work its magic on truly random sequences of 1s and 0s. While we’ve observed little drop in performance moving from pseudo to fully random workloads on SandForce drives with server-style overprovisioning percentages, the same can’t be said for current consumer-grade drives that set aside much less of their flash capacity as “spare area” for the controller.
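The effect is easy to demonstrate with a general-purpose compressor standing in as a rough proxy for SandForce's proprietary logic — zlib here, emphatically not the controller's actual algorithm, with hypothetical 4KB buffers:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Fraction of the payload a DEFLATE pass can squeeze out.
    0.0 means effectively incompressible; higher means easier pickings
    for write-compression schemes."""
    compressed = zlib.compress(data, 6)
    return max(0.0, 1.0 - len(compressed) / len(data))

# "Pseudo random": one 4KB random buffer recycled across the payload.
block = os.urandom(4096)
pseudo_random = block * 256            # 1MB of repeating randomness
# Fully random: every byte drawn fresh, with no pattern to exploit.
fully_random = os.urandom(4096 * 256)
```

The recycled buffer compresses dramatically well despite being built from random bytes, while the fully random payload doesn't compress at all — which is exactly why the switch matters for SandForce drives.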

Over the last few years, we’ve watched new storage controller drivers effectively cap IOMeter performance scaling beyond 32 outstanding I/O requests. The Serial ATA spec’s Native Command Queue is 32 slots deep, and more than one drive maker has told us that this queue is rarely full. As a result, we’re only testing up to 32 concurrent I/O requests.

The m4 and 510 Series come out ahead across all four IOMeter access patterns, with the Crucial drive taking the read-dominated web-server test, and the Intel SSD posting higher transaction rates in the others.

Somewhat surprisingly, the 320 Series does pretty well here. It’s right in the thick of things with the second wave of drives, which includes the synchronous SandForce pack. Further adrift lie the asynchronous drives, the Performance 3 and Agility 3. The Agility 3 is definitely the slower of the two with the file server, database, and workstation access patterns, which are the only ones to mix reads and writes.

Injecting writes into the access pattern can have a huge impact on relative performance. Just look at the Performance 3 Series, which manages third place with the read-exclusive web-server access pattern but falls to the back of the SSD field with the other three, all of which include meaty write components.

Boot duration

We’re limited in how we can measure storage performance with actual applications, but we have come up with a handful of load-time tests that do just that. This is the only batch of performance tests that presses the competitors into service as system drives housing the operating system. It’s only fitting, then, that we start by timing how long it takes to load the OS. Here, we’re relying on Windows 7’s own performance-monitoring capabilities to clock the boot duration, which is the time between BIOS initialization and when the system has loaded all processes and idled for 10 seconds. We’re reporting the boot duration minus those superfluous seconds.
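In other words, the reported figure is a simple subtraction; a trivial sketch, with the 10-second idle window as the default and the function name our own:

```python
def boot_duration_seconds(reported_ms: int, idle_window_ms: int = 10_000) -> float:
    """Windows 7's boot trace runs until the system has idled for ten
    seconds; subtracting that window leaves the boot duration we report."""
    return (reported_ms - idle_window_ms) / 1000.0
```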

Ah yes, another example where the difference between SSDs is much smaller than the gap between the solid-state crowd and a very fast mechanical hard drive. The top five SSDs all load Windows within half a second of each other. Only one of the SandForce drives can be found in that lead group, while the rest stumble in 1-2 seconds off the mark set by the Crucial m4.

Level load times

Upgrading to a fancy solid-state drive will likely have little impact on in-game frame rates. But will you be able to load levels any faster?

Yes, at least when compared to a mechanical hard drive. The SSDs are much quicker than the Caviar Black in both Portal 2 and Duke Nukem Forever. Although you might only boot a system once a day, the average gaming session will consist of numerous level loads.

You’d have to string together many level loads before the performance gaps between the SSDs added up to a noticeable difference; only about a second separates the slowest examples from the fastest.

Compiling

We’ve long thought that code compiling might be able to tease out meaningful performance differences between different storage solutions, so we’ve taken one more shot at the problem with a little help from FileBench creator Bruno “morphine” Ferreira. This test starts with version 2010.05 of the Qt application framework source, which is compiled with multiple threads using the MinGW port of GCC 4.4.0. Mad props to morphine for packaging this test so nicely.

I’d also blame him for the fact that this test doesn’t show any real advantage for solid-state drives, but that’s a notable result in itself. After two failed attempts, I think we’re going to have to take a break from developing compiling benchmarks for SSD testing. This test does have potential uses in other reviews, however.

Power consumption

We tested power consumption under load with IOMeter’s workstation access pattern chewing through 32 concurrent I/O requests. Idle power consumption was probed one minute after processing Windows 7’s idle tasks on an empty desktop.

Notebook users beware: 2.5″ mobile drives don’t consume anywhere near as much power as the Caviar Black. In reality, you’re looking at power draw in the 1-3W range, making it difficult to argue that adding an SSD to your notebook will dramatically improve battery life. It will, however, allow you to toss that system around with no fear of a head crash tearing through your hard drive’s mechanical platters in a grinding crescendo of catastrophic data loss.

Among our collection of solid-state drives, the Intel 320 Series consumes the least amount of power overall. All of the drives are pretty power-efficient, although the ones based on SandForce silicon do tend to draw more wattage both at idle and under load.

The value perspective

Still with me? Congratulations, you’ve reached our famous value analysis, which adds capacity and pricing to the performance data we’ve explored over the preceding pages. We used Newegg prices to even the playing field for all the drives, and we didn’t take mail-in rebates into account when performing our calculations.

First, we’ll look at the all-important cost per gigabyte, which we’ve obtained using amount of storage capacity accessible to users in Windows.

Obviously, the mechanical drive is in another class here. But ignore that for a moment and count just how many of the SSDs have reached the dollar-per-gigabyte mark… plus some change. On capacity alone, the Force 3 looks like the best deal of the lot, followed closely by the Agility 3, m4, and Performance 3 series.

Our remaining value calculations require a single performance score, which makes things a little complicated. We’ve come up with an overall index that normalizes SSD performance against a common baseline provided by the Caviar Black. This index uses a subset of our performance data, including HD Tune’s random 4K response times and average transfer rates, our used-state FileBench results, scores from all five DriveBench 1.0 workloads, mean DriveBench 2.0 service times plus the percentage above 100 ms, IOMeter transfer rates for each access pattern with eight outstanding I/O requests, the Windows 7 boot duration, and our load times in Portal 2 and Duke Nukem Forever.

Time constraints prevented us from using a slower baseline drive than our Caviar Black, which actually scored better than a few of the SSDs in a couple of DriveBench metrics. To prevent those scores from jacking with the overall results, we’ve fudged the numbers slightly to match our mechanical baseline. Calculating overall performance scores is an imperfect science, and I may have to dust off our old 4,200-RPM notebook drive to set a new baseline for future reviews.
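A sketch of that normalization, including our reading of the floor applied to the sub-baseline scores — the function name and signature are ours:

```python
def normalized_score(result, baseline_result, higher_is_better=True):
    """Express a result as a percentage of the mechanical baseline.
    Sub-baseline results are clamped to 100% so that an SSD beaten by
    the hard drive in one test can't drag down its overall score."""
    if higher_is_better:
        ratio = result / baseline_result
    else:  # e.g. service times, where lower is better
        ratio = baseline_result / result
    return max(1.0, ratio) * 100.0
```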

We’ve been using a harmonic mean to generate our overall score for storage performance because it does a good job of handling normalized results that can vary by several orders of magnitude from one test to the next. After much reading on the subject and calculating numerous performance scores in previous storage reviews, we’re convinced this is the best approach for our particular mix of tests.
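The appeal of the harmonic mean is that it punishes a single weak result more than the arithmetic mean rewards a single strong one. Python's standard library can compute it directly; the scores below are hypothetical, not our measured results.

```python
import statistics

def overall_score(normalized_scores):
    """Combine per-test scores (as percentages of the baseline) with a
    harmonic mean, which keeps one runaway high score from papering
    over a weak spot elsewhere in the suite."""
    return statistics.harmonic_mean(normalized_scores)
```

For example, a drive scoring 100% and 300% on two tests lands at 150% overall rather than the 200% an arithmetic mean would report.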

We have a healthy habit of zeroing our graphs here at TR, but I guess this one could start at 100%, which represents the overall performance of our Caviar Black hard drive. Everything above that mark is gravy, and the synchronous SandForce drives are comfortably in the lead. The Vertex 3, Force GT, and HyperX are clearly superior to their asynchronous counterparts overall.

Although slower than the leaders, the Agility 3 and Force 3 manage to stay ahead of the rest of the field. The Crucial m4 isn’t far behind, relegating the Intel drives and the Performance 3 to the back of the pack.

So, what happens if we mash this overall performance score with cost and capacity? Magic! Or, rather, performance per dollar per gigabyte, which divides each SSD’s overall score by its cost per gigabyte. We’ll express this value metric as a single score in a bar graph before exploring the relationship between performance and cost-per-gigabyte in a scatter plot.
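Spelled out, the math behind both metrics is straightforward; a quick sketch with function names of our own choosing:

```python
def cost_per_gb(price_dollars, user_capacity_gb):
    """Dollar-per-gigabyte based on the capacity Windows actually
    exposes to the user, not the number on the box."""
    return price_dollars / user_capacity_gb

def value_score(overall_performance, price_dollars, user_capacity_gb):
    """Performance per dollar per gigabyte: the overall performance
    index divided by the drive's cost per gigabyte."""
    return overall_performance / cost_per_gb(price_dollars, user_capacity_gb)
```

So a hypothetical $200 drive exposing 100GB and scoring 150% overall would register $2 per gigabyte and a value score of 75.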

Our three synchronous SandForce configs occupy the upper tier on the performance axis, but the lower price tags attached to the Force GT and Vertex 3 make those drives more attractive than their HyperX counterpart. The Force 3 is considerably cheaper than all of the synchronous SandForce SSDs—and its asynchronous Agility 3 twin. However, it’s also quite a bit slower than the synchronous stuff.

Although this analysis is helpful when evaluating SSDs on their own, what happens when we consider the cost of drives in the context of a complete system? To find out, we’ve divided our overall performance score by the total cost of our test system’s components, which runs around $800 at Newegg before adding the SSDs.

As usual, the scatter plot gives us more useful information than the bar graph. Here, we see just how small the price differences are between some of the drives. The synchronous SandForce SSDs don’t cost that much more than cheaper alternatives when one takes into account the cost of a complete system.