For a moment, pretend that Intel’s 6-series chipset bug doesn’t exist. Turn the clocks back a couple of weeks and bask in the afterglow that followed the launch of Intel’s Sandy Bridge CPUs. This long-anticipated architectural refresh brought improved performance, lower power consumption, and surprisingly competent integrated graphics to a swath of mid-range processors. Indeed, we were so impressed that we handed out Editor’s Choice awards for several Sandy Bridge models—something we haven’t done for several generations of new CPUs.

Never mind that a single transistor can sink the long-term 3Gbps Serial ATA performance of the associated 6-series chipsets; Sandy Bridge remains the bomb. The 6-series chipset bug is just that: a problem with the chipset that only reflects poorly on the processor because both must be present for a system to run. Motherboards based on a new chipset stepping are due in a couple of months, and all indications suggest that Intel’s latest CPUs will be just as attractive then as they were a couple of weeks ago.

Enthusiasts contemplating a Sandy Bridge build are best off with one of Intel’s K-series CPUs: either the Core i5-2500K or the Core i7-2600K. The former offers four cores with a 3.3GHz base clock speed and a 3.7GHz Turbo peak, while the latter kicks those clocks up by 100MHz and throws in Hyper-Threading for good measure. By far the most important feature of these K-series models is a set of unlocked multipliers that facilitates easy overclocking.

Standard Sandy Bridge CPUs can only increase their core multipliers by four ticks above the default, putting a hard cap on overclocking headroom. More traditional overclocking methods that rely on increasing the base clock speed without touching the multiplier haven’t worked terribly well with Sandy Bridge because most of the CPU’s components key off that base clock. That’s made the K-series parts a must-have for enthusiasts looking to squeeze as much love as possible from their Sandy Bridge rigs.
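To put rough numbers on that ceiling, here's a quick sketch of the arithmetic, assuming a stock 100MHz base clock and the Core i7-2600K's 34X default multiplier (standard published figures, not values from our testing):

```python
# Sandy Bridge core clock = base clock (BCLK) x multiplier.
BCLK_MHZ = 100  # stock base clock; raising it much tends to destabilize the platform

def core_clock_mhz(multiplier, bclk_mhz=BCLK_MHZ):
    """Effective core clock in MHz for a given multiplier."""
    return multiplier * bclk_mhz

default_mult = 34                    # 3.4GHz stock on the i7-2600K
capped_mult = default_mult + 4       # non-K parts: at most four bins above default
print(core_clock_mhz(default_mult))  # 3400
print(core_clock_mhz(capped_mult))   # 3800 -- the practical ceiling without a K-series chip
```

A K-series part removes the `+4` cap, which is why its unlocked multiplier matters so much more here than it did on older platforms with flexible base clocks.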

In addition to allowing core speeds to be tweaked with little effort, the K series’ unlocked multipliers also make it easy to take advantage of faster memory. Standard Sandy Bridge processors may default to a 1333MHz memory clock, but select DDR3 modules are capable of running at much higher speeds. In some cases, you won’t pay much of a premium. Name-brand DDR3-1600 kits start at around $45 for 4GB, which isn’t much more than the cost of equivalent DDR3-1333 sticks. For roughly twice that amount (and very close to what slower DDR3 memory cost only a year ago), you can get your hands on exotic modules rated for operation up to 2133MHz.
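Those module ratings translate directly into peak theoretical bandwidth. A back-of-the-envelope sketch (our own arithmetic, assuming a 64-bit channel and dual-channel operation, as on our test platform):

```python
def peak_bandwidth_gbs(data_rate_mts, channels=2, bus_width_bits=64):
    """Peak theoretical DDR3 bandwidth in GB/s:
    transfers per second x bytes per transfer x number of channels."""
    bytes_per_transfer = bus_width_bits // 8
    return data_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

for rate in (1333, 1600, 2133):
    print(f"DDR3-{rate}: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR3-1333: 21.3 GB/s
# DDR3-1600: 25.6 GB/s
# DDR3-2133: 34.1 GB/s
```

Real-world throughput falls well short of these ceilings, but the relative gaps give a sense of what faster DIMMs put on the table.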

Curious to see whether fancy DIMMs are worth the premium, we’ve taken the time to explore Sandy Bridge performance with a range of different memory configurations. Read on to see how memory clock speeds and latencies impact Intel’s latest processor architecture.

Test notes and methods

If you haven’t done so already, I strongly suggest reading our initial coverage of Intel’s Sandy Bridge CPUs. That review puts the performance of Intel’s new hotness in context against a wide range of contemporary competitors, while this article will focus on the impact of memory speed on the Core i7-2600K. To explore that arena, we’re going to need some fancy DIMMs.

Kingston handed us a 4GB kit of its HyperX DDR3-2133 KHX2133C9AD3X2K2/4GX memory at CES earlier this year, so we popped it into a Sandy Bridge system and went to town. With low-profile heatsinks and a stately gray aesthetic, the HyperX modules look surprisingly understated for premium memory. Don’t let the reserved exterior fool you, though. Beneath those heatsinks lies an array of DDR3 memory chips rated for operation at frequencies up to 2133MHz. At that speed, you’re looking at timings of 9-11-9-27, which is a little looser than the 9-9-9-24 latencies typical of DDR3-1333 modules.
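Keep in mind that timings are counted in memory clock cycles, so a higher-clocked module can post looser numbers yet still return data more quickly in absolute terms. A quick sketch of the conversion to wall-clock time (our own back-of-the-envelope math):

```python
def cas_latency_ns(cas_cycles, data_rate_mts):
    """Absolute CAS latency in nanoseconds.
    The memory clock runs at half the DDR data rate, so one clock
    cycle lasts 2000 / data_rate nanoseconds."""
    return cas_cycles * 2000.0 / data_rate_mts

print(round(cas_latency_ns(9, 1333), 1))  # 13.5 ns for DDR3-1333 at CL9
print(round(cas_latency_ns(7, 1333), 1))  # 10.5 ns at tighter CL7
print(round(cas_latency_ns(9, 2133), 1))  # 8.4 ns for DDR3-2133 at CL9
```

By this measure, the HyperX kit's CL9 at 2133MHz is actually quicker than even the tightened CL7 config at 1333MHz.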

Frequency and latency combine to influence memory performance, so we’ve tested a number of different combinations. The first is a standard setup with DDR3-1333 at the 9-9-9-24 timings common among inexpensive desktop modules. To see how more aggressive latency settings change the picture, we’ve run another set of tests at 1333MHz but with tighter 7-7-7-20 timings.

We’ll also look at how frequency comes into play with a set of results for the memory running at 1600MHz with 9-9-9-24 timings and at 2133MHz with 9-11-9-27 timings. Although we couldn’t get the system stable at that top memory speed with the 9-9-9-24 timings used for two of the other configs, higher latency settings generally come hand-in-hand with high-frequency modules. An aggressive 1T command rate proved stable with all the configurations, so we used it across the board.

With few exceptions, all tests were run at least three times, and we reported the median of the scores produced.
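That reporting scheme is simple enough to sketch in a few lines; the run counts and scores below are purely illustrative:

```python
from statistics import median

def report_score(run_scores):
    """Report the median of repeated benchmark runs to damp outlier results."""
    return median(run_scores)

print(report_score([41.2, 40.8, 41.0]))  # 41.0
```

Using the median rather than the mean keeps a single anomalous run from skewing a config's reported result.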

Processor: Intel Core i7-2600K 3.4GHz
Motherboard: Asus P8P67 PRO
BIOS revision: 1204
Platform hub: Intel P67 Express
Chipset drivers: Chipset 9.2.0.1019, RST 10.1
Memory size: 4GB (2 DIMMs)
Memory type: Kingston HyperX KHX2133C9AD32X2K2/4GB
Memory speeds and timings: 1333MHz at 7-7-7-20-1T, 1333MHz at 9-9-9-24-1T, 1600MHz at 9-9-9-24-1T, 2133MHz at 9-11-9-27-1T
Audio: Realtek ALC892 with 2.55 drivers
Graphics: Asus EAH5870 1GB with Catalyst 11.1 drivers
Hard drive: Western Digital Raptor WD1500ADFD 150GB
Power supply: PC Power & Cooling Silencer 750W
OS: Microsoft Windows 7 Ultimate x64

We’d like to thank Asus, Intel, PC Power & Cooling, and Western Digital for helping to outfit our test rigs with some of the finest hardware available.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 60Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance

The most logical place to begin our journey is with a look at memory subsystem performance. These results will quantify the speed of our system’s memory before we dive into application and gaming tests to determine where the extra oomph matters.

Running DIMMs at a higher frequency boosts memory bandwidth—shocking, I know. Stream measures a nice increase in bandwidth going from 1333 to 1600 and 2133MHz. The rise in bandwidth between 1333 and 1600MHz is nearly linear, and our 2133MHz config doesn’t lose too much ground on account of its looser timings. Jumping from 1333 to 2133MHz is good for more than a 50% increase in Stream memory bandwidth.
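For reference, Stream's kernels are simple array operations. Here's a rough Python imitation of its triad kernel (a sketch of the technique only, not the actual Stream benchmark, which is written in C and Fortran):

```python
import time
import numpy as np

def triad_bandwidth_gbs(n=50_000_000, scalar=3.0):
    """Estimate memory bandwidth with a STREAM-style triad: a = b + scalar * c.
    Counts three arrays' worth of traffic (read b, read c, write a)."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    start = time.perf_counter()
    a = b + scalar * c          # the triad kernel itself
    elapsed = time.perf_counter() - start
    bytes_moved = 3 * n * 8     # three float64 streams of n elements
    return bytes_moved / elapsed / 1e9

print(f"{triad_bandwidth_gbs():.1f} GB/s")
```

Python overhead means this will understate what a compiled Stream binary reports, but the shape of the workload (long sequential streams that blow out the caches) is the same.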

Memory frequency matters quite a bit more than latency in this test, as our two 1333MHz results make plainly clear. Tightening timings from 9-9-9-24 to 7-7-7-20 only increases memory bandwidth by a few percent.

In a specific test of memory access latency, tighter timings produce a more substantial gain. Frequency still reigns supreme, though. Our 1600MHz config is a few nanoseconds quicker than the best we managed at 1333MHz. As one might expect, access latencies are even faster when the DIMMs are cranked up to their top speed.

Application performance

Before we dip into common desktop applications, let’s drag out something from the always exciting field of scientific computing. We’ve always found the Euler3d computational fluid dynamics test to be particularly responsive to improvements in memory subsystem performance, but does that trend hold with Sandy Bridge?

In a word, yes. Memory frequency is still the biggest determining factor, but latency also plays a big role. Migrating from 1333 to 1600MHz with the same timings yields a half-point increase in the Euler3d score. The much bigger jump from 1600 to 2133MHz produces a performance increase of the same magnitude, suggesting that the 2133MHz config’s looser timings are holding it back. We also see a nice little boost in performance when moving the 1333MHz setup to tighter timings.

Now, onto some more common desktop applications, starting with the SunSpider JavaScript browser benchmark.

Just ten milliseconds separate our four configs. The low-latency DDR3-1333 config fares the best here, while the 2133MHz config scores the worst. Those results suggest that tighter timings are more important than a higher frequency, but the scores are really too close to call.

Scores remain close in 7-Zip. We have nearly a dead heat in the decompression test, and the compression results show some favor for higher memory frequencies. We’re not seeing anything close to the gaps observed in our memory subsystem tests, though.

The x264 video encoding benchmark doesn’t do much with the extra bandwidth provided by our faster memory configs, though it does do a little. Raising the memory frequency and tightening timings both improve performance by small margins. However, splurging on fancy DIMMs isn’t going to speed up your encoding times dramatically.

It’s not going to do anything for file encryption performance, either—at least not with TrueCrypt.

Our Cinebench scores suggest that the Core i7-2600K isn’t bound by memory speed when crunching single- or multithreaded rendering workloads.

Gaming

Games are arguably the most demanding applications that enthusiasts run on their PCs on a regular basis. To find out whether faster memory affects in-game frame rates, we collected a handful of titles and ran them through two sets of tests. The first batch was conducted at a modest resolution and with low in-game detail settings to remove the graphics card as a potential bottleneck. For the latter, we pushed the resolution to 1920×1080 and cranked the detail levels as high as we could while maintaining playable frame rates.

We tapped each game’s built-in benchmarking component to test its performance. All four titles were run in DirectX 11 mode, even when using low detail settings. For Civilization V, we used the full render score, which should be the most representative of real-world performance. That score has been converted to frames per second to make the graphs easier to understand.

At low resolutions and detail levels, we’re not seeing much of a case for faster memory. A higher memory frequency buys a few frames per second here and there, but that’s pretty much the extent of it. Our low-latency DDR3-1333 config doesn’t really separate itself from the pack, either.

With the exception of competitive Counter-Strike players trying to purge any potential for performance hiccups—real or imagined—most folks use the highest resolution and detail levels they can when playing games. That tends to make one’s graphics card the bottleneck, which is why we see even less separation with this round of tests. At best, the difference between our fastest and slowest memory configs amounts to a few FPS.