If you've got even a passing interest in the subject, you're undoubtedly aware that true progress in general-purpose x86 multicore programming has been slow and uncertain. Intel and AMD may have made the technology affordable—a quad-core system that easily cost thousands of dollars just five years ago now runs in the low hundreds—but software development has lagged well behind the pace at which new multicore chips have arrived.

A new report from Gartner suggests we're fast approaching a time when top-end servers simply won't be able to use all of the additional cores they're being handed. Recent comments at the Multicore Expo echo the analyst firm's claim: expect core counts to grow well beyond what software can exploit. Gartner analyst Carl Claunch doesn't pull any punches.

"Looking at the specifications for these software products, it is clear that many will be challenged to support the hardware configurations...accelerating in the future," said Carl Claunch, vice president and distinguished analyst at Gartner. "The impact is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it."

I agree with Gartner's central thesis insofar as a 32-socket server built on 32-core processors would be a monstrosity; most businesses would be hard-pressed to take full advantage of that much power in a single server. With all due respect to Mr. Claunch and Gartner, however, I think the firm has completely misread the shape of the long-term semiconductor market.
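The "hard-pressed to take full advantage" point has a classic formal expression worth keeping in mind: Amdahl's law, which caps the speedup extra cores can deliver by a program's serial fraction. A quick sketch (the 90-percent-parallel figure is my illustrative assumption, not anything from Gartner's report):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores for a workload whose parallelizable fraction is p."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a program that is 90% parallel -- generous for most business
# software -- can never exceed a 10x speedup, no matter the core count.
for cores in (4, 32, 1024):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Past a few dozen cores the curve is essentially flat, which is exactly the engine-in-a-go-cart mismatch Claunch describes.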

With dual-core chips slipping toward the value segment and quad-core nearly ubiquitous across the upper end of the market, it's easy to see why Gartner takes the position it does. Quad-cores have become relatively cheap, and there's Intel up on stage, talking about the octal-core Nehalem-EX (codenamed Beckton).

This does not, however, mean that it's business as usual across any of Intel's product lines. For the first time in the company's history, it's leading with its smallest, weakest (computationally speaking), and lowest-power parts—and customers can't get enough of them.

Intel isn't going to break with the current zeitgeist, particularly given today's economic realities. Yes, servers and massive HPC clusters will still be built at the very highest end of the market, and sure, we'll see socket counts and total core installations rise. It does not, however, follow that by the time Intel is pushing some hypothetical 32-core chip, we consumers will be humming happily along on our "mere" 16-core chips. One option, as flabbergasting as it may seem, would be to simply build smaller chips and use the available die space for something else altogether. We've barely begun to explore the concept of pairing x86 processors with specialized coprocessors to balance low-power operation against peak performance; there's no reason to assume that all of a future chip's cores—or even a majority—would be standard x86 cores.

As far as truly "multithreaded" applications are concerned, we aren't just attempting to reinvent the wheel; we're still trying to figure out which tools we'd need to even attempt a wheel. We're going to make progress, but progress takes time—years, in this case. While we're waiting, there's a fascinating brawl developing in the netbook market. If VIA manages a launch and AMD lives up to expectations, Atom's near-lock on the netbook segment could vanish in a heartbeat.
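To make the "wheel" problem above concrete, here's a minimal sketch (mine, not from the report) of the most basic hazard multithreaded code has to manage: a shared counter's read-modify-write is not atomic, so concurrent increments can silently lose updates unless something serializes them.

```python
import threading

counter = 0
lock = threading.Lock()

def add(iterations):
    """Increment the shared counter; the lock makes += effectively atomic."""
    global counter
    for _ in range(iterations):
        with lock:  # drop this lock and threads can silently lose updates
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- the lock guarantees no increment is lost
```

Getting even this ten-line example right requires reasoning about interleavings; scaling the same discipline to a whole application is the tooling gap the industry is still working on.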