Overclocking has become an integral component of system building enthusiasm: With a couple of switch throws in BIOS, a CPU can yield significantly higher performance than its stock settings. As we discussed in our overclocking primer, individual CPUs don't ship pre-clocked at their highest possible stable setting for a number of reasons - primarily silicon frequency / voltage tolerance variability.

Not all silicon is produced equally. AMD, NVIDIA, and Intel all define a baseline stable frequency that results in the highest die yield at the desired voltage and power spec, then ship all their components at those settings. This is primarily for business (yield) reasons, but it also makes the consumer more likely to receive a product with a long lifespan and stable operation; higher die yield per wafer means greater product availability, increased profit per wafer, and reduced operating costs.

The same "not all silicon is produced equal" principle applies to all silicon-based products, not just CPUs. RAM, for instance, is binned and built to certain specs to best match the available silicon and target spec. This even applies to SSDs: The ASIC ships at a predefined frequency that provides the best stable performance/endurance split as defined by the controller manufacturer's validation testing, but could theoretically be overclocked in much the same way as a CPU or RAM. That's what we're here to talk about (and what the video below walks through).

Intel was using a new prototype SSD (not its new 525 Series SSD) for its SSD overclocking technology and ran on-site SSD OC benchmarks. The drive doesn't yet have a name and has no public codename. For the sake of not repeating "the prototype SSD" over and over, we'll henceforth refer to it as OPSSD -- Overclockable Prototype Solid-State Drive (or 'overpowered,' if you prefer the gaming world's use of OP).

Intel is the first company to publicly make moves toward consumer-overclockable SSDs; in our video interview, SSD Marketing Specialist Justin Whitney walks us through a "how to overclock your SSD" lab, talks about the technology, and covers the prototype basics. It was emphasized heavily that SSD overclocking is a prototype technology (and, in fact, was being demoed for the first time at PAX on prototype drives), so Intel needs your input before it even decides whether to bring this to market. Check out the video below, then leave a comment and let us know what you think of SSD overclocking and whether it's something you'd be interested in doing.

We'll also briefly talk about Intel's new microcontroller, as featured in OPSSD.

Intel SSD Overclocking Benchmark - How to Overclock Your SSD's Controller & NAND

Before diving deeper into this, I want to make it clear that this is prototype technology and may not make it to market if Intel doesn't see enough interest. That stated, the new microcontroller (which will be discussed further down) will probably see use in next-gen SSDs, regardless of SSD overclocking's marketability.

In the above video, Justin Whitney takes us through the objectives of SSD overclocking, the advantages yielded by pushing the frequency higher, potential shortcomings, and then gives us a hands-on lab of the process. ASIC/NAND channel bus overclocking is done on the software layer within the OS by using Intel's Extreme Tuning Utility (XTU). In its current form, XTU's SSD tuning section can tweak the controller frequency and the NAND bus frequency. Additional options are a future possibility.

The NAND bus frequency dictates the speed at which the controller (via the bus) communicates with the NAND Flash modules on the device. As some of you may know, a block diagram-like overview of SSD anatomy would show us the device's controller, Flash modules (the actual storage components), and the channel connecting the controller and the NAND Flash. Controllers have a designated number of channels to communicate with the Flash modules (which is actually part of why we see fewer 64GB drives lately - it becomes inefficient to communicate with so few modules); the bus itself—just like any other bus, such as the legacy FSB—pushes data at a constant, defined frequency. Under its stock settings, OPSSD's NAND bus frequency ships at 83MHz, but can be upped to 100MHz at the push of a button.
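To put that bus bump in perspective, here's a quick back-of-envelope sketch of the theoretical bandwidth uplift from 83MHz to 100MHz. The 8-bit bus width and 8-channel count are our own illustrative assumptions -- Intel hasn't published the prototype's channel layout.

```python
# Rough theoretical NAND bus bandwidth uplift from the overclock described
# above. Bus width and channel count are assumptions for illustration only.
STOCK_MHZ = 83
OC_MHZ = 100
BUS_WIDTH_BYTES = 1   # assumed 8-bit ONFI-style bus per channel
CHANNELS = 8          # assumed channel count (not disclosed by Intel)

def bus_bandwidth_mb_s(freq_mhz):
    """Theoretical aggregate NAND bus bandwidth in MB/s."""
    return freq_mhz * BUS_WIDTH_BYTES * CHANNELS

uplift = bus_bandwidth_mb_s(OC_MHZ) / bus_bandwidth_mb_s(STOCK_MHZ) - 1
print(f"{uplift:.1%}")  # ~20.5% more theoretical bus bandwidth
```

Note that a ~20% bus-frequency gain is a ceiling, not a promise: real transfer rates also depend on the controller keeping the channels fed.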

As for the controller, we're given free rein from 400MHz (stock) up to 625MHz (max stable OC). For purposes of the PAX demo, Intel limited XTU's OC capabilities to known-good (stable) settings on OPSSD (in a similar fashion to the limited / non-K IB CPUs). This brings up an interesting question, though: If 625MHz is a known-good clock on the majority of the dice used in the product, why not ship at that frequency natively?
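For reference, the headroom on the controller clock is considerably larger than the NAND bus bump:

```python
# Controller clock headroom from stock 400MHz to the known-good 625MHz
# ceiling quoted above (frequencies from Intel's PAX demo).
STOCK_MHZ = 400
MAX_OC_MHZ = 625

headroom = MAX_OC_MHZ / STOCK_MHZ - 1
print(f"{headroom:.0%}")  # ~56% over stock
```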

There are a lot of reasons for this -- some technical, some endurance-related, and some marketing-related. The silicon used in the device has been internally tested for its most stable setting, and while nothing is final, the stock 400MHz seems to be what Intel has decided on as its demonstration-ready frequency. Increasing microcontroller frequency introduces volatility, endangering the stability of the device (as with a CPU); even if most shipping devices can tolerate an additional 200MHz OC, that doesn't mean all of them can. Increasing frequency could also threaten the P/E endurance of the NAND, potentially degrading usable lifespan as the OC increases. Intel doesn't yet have formal validation data on endurance degradation (if it's even noticeable) from overclocking, given the device's true prototype status.

This noted, SSD endurance already generally exceeds the usable life of the system, so any endurance degradation would have to be severe to become relevant for the average consumer.

When testing for PAX, Intel techs ran the OPSSDs through an R/W endurance test using Anvil Benchmark (our favorite suite at GN). Whitney noted that the test drives averaged 90TB written (programmed and erased to the full device) before testing was interrupted due to time. Because the prototype drives only needed to operate for 4 full days (PAX weekend), 90TB was more than enough to ensure the devices would survive their first public hands-on. Again, formal validation has not yet been performed.
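To frame that 90TB figure in consumer terms, here's a rough lifespan estimate. The 20GB/day write rate is an illustrative assumption on the heavy end for a consumer workload, not an Intel figure.

```python
# Back-of-envelope: how long 90TB of writes (the amount the demo drives
# survived before the test was cut short) lasts at a heavy consumer
# write rate. 20GB/day is an assumed workload, not a published spec.
TB_WRITTEN = 90
GB_PER_DAY = 20

days = TB_WRITTEN * 1000 / GB_PER_DAY
print(f"{days / 365:.1f} years")  # ~12.3 years at 20GB/day
```

And remember, the drives hadn't actually failed at 90TB -- the test simply ran out of time.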

As for marketing reasons for OC technology, well, those are obvious. Giving consumers a feeling of control (and giving enthusiasts more knobs to play with) is always an appealing factor in a high-end PC component. 4K random IOPS improvements hovered in the 18% (write) and 22% (read) ranges between the OC'd and non-OC'd OPSSDs, so these efforts certainly aren't just for packaging buzzwords; they yield legitimate performance gains.
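Since those gains were quoted as percentages, it helps to see what a 4K random IOPS uplift means in throughput terms. The baseline IOPS figure below is hypothetical -- Intel didn't publish absolute numbers at PAX.

```python
# Converting 4K random IOPS to throughput to illustrate the quoted 22%
# read uplift. The 50,000 IOPS baseline is a hypothetical figure chosen
# for illustration; Intel published only the percentage deltas.
BLOCK_KB = 4

def iops_to_mb_s(iops):
    """Throughput in MB/s for a given IOPS rate at a 4KiB block size."""
    return iops * BLOCK_KB / 1024

base_read_iops = 50_000               # assumed stock 4K random read IOPS
oc_read_iops = base_read_iops * 1.22  # +22% read uplift from the OC
print(f"{iops_to_mb_s(base_read_iops):.0f} -> {iops_to_mb_s(oc_read_iops):.0f} MB/s")
```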

Assuming Intel decides to bring the OPSSD / XTU combination to market, it could cause a bit of a shake-up in the back-end of warranty departments as other manufacturers move to compete with their own overclocking utilities. Intel hasn't yet decided how it would handle warranties for OC'd drives, but we'd speculate it'd be pretty similar to what you're used to seeing on K-SKU CPUs: Big disclaimers, data redundancy suggestions, and then voided warranties once exiting a predefined stable range. Whitney noted that if the company were to ship the product, they'd like to consider more freedom for enthusiast overclockers (read: exiting stable frequencies), but only if they could bundle it with an unlock utility. In order to protect the drive, SSDs have a freeze function that effectively locks the drive from use when it becomes volatile. The drive can be unlocked, but requires dedicated utilities and will likely suffer data corruption.

The team is also considering backup utilities as bundle options (like Acronis), supplying users a means to back up their data prior to toying with overclocking. This speaks to the team's dedication to helping protect the end user from data loss, which is always a concern when overclocking any component. As component frequencies increase, the system's communication channels could start exhibiting anomalous behavior and instability resulting from exiting spec.

"K-SKU" equivalent SSDs have been discussed by the team, but aren't within the current scope of market research.

Looking at Intel's New Prototype SSD Flash Storage Processor (Controller)

All the overclocking excitement aside momentarily -- whether or not that makes it to market -- Intel did show us their new controller tech in the OPSSD. The new controller (model unspecified) exhibits a native 10% average performance hike over that of Intel's 520 series SSD, and that's before the 18-22% random improvement granted from OCing the ASIC. At this point we start running into bus saturation (SATA bandwidth cap), so endurance and stability start taking priority over further performance improvements.
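The "SATA bandwidth cap" mentioned above is easy to quantify: SATA III's 6Gb/s line rate uses 8b/10b encoding, so only 8 of every 10 bits on the wire carry actual data.

```python
# SATA III usable bandwidth ceiling: 6Gb/s line rate with 8b/10b encoding
# (8 data bits carried per 10 line bits), divided by 8 bits per byte.
LINE_RATE_GBPS = 6
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding

usable_mb_s = LINE_RATE_GBPS * 1000 * ENCODING_EFFICIENCY / 8
print(f"{usable_mb_s:.0f} MB/s")  # 600 MB/s ceiling
```

Sequential throughput on fast drives already sits close to that 600MB/s wall, which is why further controller OC headroom buys diminishing returns.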

Consumer-class SSD controllers have historically done incredibly well with either compressible or incompressible data, but very rarely have the devices exhibited consistently high performance with both data types. Generally, in real-world consumer applications, high compressible performance is preferable given its prevalence in everyday computing. Most data consumed by gamers will be compressible (including the OS and its background random processes), though incompressible data makes appearances with certain screencap applications and media render / computation-intensive processes. For gamers who also produce consumable media content, like YouTube videos or certain types of incompressible video streams, high incompressible performance becomes more desirable.

Intel says that its new SSD controller takes a "why not both?" approach to incompressible and compressible performance. From the very limited test we ran on the floor at PAX, it seems they may be on the right path to achieving this objective. We haven't conducted formal benchmarking yet, so we can't fully validate any claims at this time, but I can say that it's a promising piece of tech.

Our Thoughts & Your Thoughts

Whitney very directly asked for our readers' thoughts on SSD overclocking, its usefulness to you, and what you'd like to see in Intel's XTU overclocking utility (or if you even care about SSD overclocking to begin with). Drop a comment below and let us/them know what you think thus far.

As for what our team thought? GN Director of Photography Christopher Greene pointed out that Intel could potentially take the inverse approach to SSD overclocking, adding an optional underclocking function to improve endurance and stability. This might not be immediately useful in most gaming applications, but makes for an interesting point on the enterprise front when dealing with pressure-sealed systems or engineering robotics that require long life.

Personally, I liked the idea of pushing the device to the max and brushing right up against the SATA bus limitation for maximum 4K performance; I'd also be interested to see how dual SSDs in RAID0 interact when overclocked, and how much that impacts fault tolerance when striping drives.

SSD overclocking on the whole adds a new layer - albeit a (currently) thin one - to system enthusiast toys, and that's always a positive thing. New knobs and switches and new ladders to top with the highest stable benchmark? I'm definitely looking forward to seeing where we go with this.

If nothing else, it's a cool marketing feature that gives enthusiasts more toys, which is always a good thing for our system building audience. I'd like to see unlocked SSD OCing and additional settings / toggles, even at the risk of bricking drives, if only to freshen up the OC game.

If you're curious to learn more about SSD terminology and compressible/incompressible performance and data entropy, check out this previous video with LSI. Be sure to tweet @IntelGaming or @GamersNexus for further information.

Writing & Video Editing: Steve "Lelldorianx" Burke.

Photography & Film: Christopher Greene.