HighPoint’s SSD Caching, Explored

Everyone is waiting for Intel’s next LGA 1155 chipset announcement, which we previewed in Intel Z68 Express Chipset Preview: SSD Caching And Quick Sync. Z68 is expected to fix shortcomings of the existing H67/P67 Express models and add SSD caching capabilities to the LGA 1155 platform (Ed.: here's a hint: keep your eyes open for more in the next 24 hours). As of right now, you can either build an overclocked system on P67 Express or use the integrated graphics engine on H67, giving up processor overclocking entirely. Z68 is expected to rectify that major marketing-driven oversight and support more flexible combinations of the integrated graphics unit, external graphics, and overclocking.

Regardless of these expected additions, Intel is aiming to maximize storage performance by pairing hard disks with a solid-state drive that serves as a high-speed cache. While we expect this feature to be an attractive option, it will be limited to the Z68 Express chipset. Consequently, if you've ignored our recommendations to "wait for Z68" and already made the jump to a Sandy Bridge-based platform, you'll be forced to buy a new motherboard in order to get access to SSD caching.

Fortunately, that's not your only option. HighPoint recently launched a PCI Express-based storage controller that performs the same task at a fairly modest price.

HighPoint Technologies says it specializes in providing cost-effective storage solutions for mainstream markets. As such, it competes with the chipset companies, as well as with entry-level solutions from vendors like Adaptec/PMC-Sierra, Areca, and LSI. While Adaptec and LSI create their own chip designs, Areca and HighPoint base their products on controllers from third-party vendors.

The product we’re looking at today is called the RocketHybrid 1220, an x1 PCI Express add-in card with two internal SATA 6Gb/s ports based on Marvell’s 88SE9130 controller. HighPoint also sells a variant of the card that employs eSATA instead.

Cache Me If You Can

The concept of caching is fundamental to almost every PC component category, especially in subsystems that work with large amounts of data. The idea is to improve performance by serving or prefetching data from a faster form of storage than the underlying device. Imagine a ladder or a pyramid. At the base you have the slowest (and yet cheapest) form of storage. Ascending the hierarchy buys you performance, but at an increasing price per gigabyte. Slotting an SSD in between your hard drive and system memory is intended to help overcome the performance limitations of magnetic storage, while keeping costs manageable by specifically using a lower-capacity solid-state device.
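To make the idea concrete, here is a minimal sketch of a read cache in front of a slow tier, using least-recently-used (LRU) eviction. This is purely illustrative; real caching drivers work at the block-device level and use their own (usually undisclosed) eviction policies, so the `BlockCache` class and its backing-store layout here are our own invention:

```python
from collections import OrderedDict

class BlockCache:
    """Toy read cache: a small, fast tier in front of slow storage.

    The backing store stands in for a hard drive; the cache for an SSD.
    LRU eviction keeps only the most recently touched blocks resident.
    """

    def __init__(self, backing_store, capacity):
        self.backing = backing_store       # slow tier: dict of block -> data
        self.capacity = capacity           # cache size, in blocks
        self.cache = OrderedDict()         # fast tier, ordered by recency
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)  # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]         # fall through to the slow tier
        self.cache[block] = data           # promote the block into the cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used
        return data
```

Repeatedly reading the same hot blocks turns slow-tier accesses into fast-tier hits, which is exactly the effect an SSD cache aims for on real workloads.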

An example: every processor has cache, typically organized in two or three stages referred to as levels. The purpose of these processor caches is to keep hot data as close as possible to the execution cores, minimizing trips to slower system memory. Hard drives usually carry a DRAM cache of 8-64 MB to reduce activity on the physical media. Optical drives use a memory buffer for caching write operations. Windows caches hard drive data and application data in available main memory. Even applications like Adobe Photoshop have their own caching implementations. At the end of the day, Windows’ indexing feature is yet another type of caching.

There have been various attempts to accelerate hard drive storage using different types of caching. Hybrid hard drives largely failed to catch on. However, there are a few product options available now that noticeably speed up system performance.