This is the end of the golden age for those of us who like to buy computers.

It’s all Apple’s fault. A significant force at the start of the personal computing revolution, Apple is now instrumental in its destruction. Thanks to the path beaten by the iPhone and iPad, the computer industry now understands how to make phones and tablets that do most of the jobs for which individuals used to buy a computer. This development of ‘just barely good enough’ tablets and phones has meant that demand for new PCs is falling drastically.

This has some negative consequences. Those of us who need (or merely want) a proper computer are going to find that it looks increasingly less like a commodity, and more like an exotic piece of specialist kit. In other words: PCs will get much more expensive. (Falling demand leading to higher prices may seem to contradict some conventional economic ideas, but this is an industry that relies heavily on economies of scale.) This will come as a nasty shock after several decades of decreasing prices and increasing performance.

And just in case it’s not obvious, ‘PC’ includes Macs here. Around a decade has passed since Apple last created Macs with truly original technical designs. The economic logic of building around commodity PC components was inescapable. In 2006 I bought two new laptops, one Mac and one PC, and out of curiosity, I did a little investigation. I enumerated all of the internal components making up each system. The Mac looked much fancier on the outside, but its innards turned out to be very nearly identical to my Dell’s. Apple instead specialised in areas where they could still charge a premium: physical design, the usability of their software, and things that aren’t PCs. Their profits illustrate the wisdom of this strategy.

The fact that Macs use commodity PC components (albeit mounted on Apple-designed circuit boards in beautiful boxes) means that the collapse of the PC market will raise Mac prices too.

Price hikes for personal computers of all kinds are inevitable because it just won’t be economically viable for parts suppliers to produce the necessary components at the scale that has historically enabled them to become so cheap.

It’s Happening Now

If you’re wondering how soon this will be upon us, be aware that Intel has recently decided not to complete a massive new chip factory despite having done most of the building work. Weak demand means that even though they’ve put a lot of money into this site, it’s just not worth the remaining investment it would take to bring the plant online.

If you were thinking of buying or building a new computer in the next couple of years, that factory may well be where its most important components would have been built. But not now.

It’s Not Happening All at Once

In some ways, the change will seem gradual. For one thing, PC components have a surprisingly long life, because today’s high-end parts often become tomorrow’s cheap bits. Designs remain on the market in various guises for much longer than you might guess from the inflated rhetoric about how fast things move in the computing industry. But if you build your own systems, you will probably see it sooner rather than later.

Some things will remain cheap for a good while. Anything needed in either a phone or a tablet will continue to be worth manufacturing in high volume. For example, graphics processors just capable enough to drive a full-size tablet panel with retina resolution will be a mass market component for a long time yet. And there’s not going to be any shortage of cheap ARM-based system-on-a-chip components either. This suggests that laptops will be affordable for a good while, because there will be a cheap way to make them: bolt a keyboard to a tablet. (Although quite what will happen to Intel’s venerable processor architecture in this new world remains to be seen.)

The downside is that, over time, it will become very expensive, or even impossible, to buy a laptop whose performance is much better than a tablet’s. And given the average tablet’s performance, that’s not an encouraging prospect.

Server components will probably be less affected. The increasing move of computing power out of the hands of users and into the cloud means that the anti-PC trend won’t have an adverse effect in the world of the server any time soon.

The first problems will hit those of us who like desktop systems, not least because the unique merits of desktop computers are not all that widely understood. If you’re in the know you can, for now, get a substantially better system if you don’t mind it being tied to a desk and cumbersome to move around. The desktop’s advantages can be particularly acute if you build the system yourself.

I built a new desktop about a year ago. It replaced a system I built about 4.5 years earlier, which still outperforms a lot of newer computers I’ve had the misfortune to use. If you build your own system, you can build it to last—by choosing carefully, you can create something which, with the odd minor upgrade, will be serviceable for years. But you do that by picking components that are fairly close to the bleeding edge. And that’s where we’re going to see costs rise first.

To be more accurate, what we will probably see is costs failing to come down. There has always been a hefty premium for the very best components at any time, so I tend to pick CPUs and motherboards that are a notch down from the top of the range. This saves hundreds of dollars, with only a very slight reduction in capability. I’m expecting, sadly, that this early-adopter premium will become significantly stickier. Soon, saving a couple of hundred dollars will drop you not just a few months behind the curve, but an entire year or more.

At Least Computers Stopped Getting Faster

About the only positive aspect of all this is that we no longer need to upgrade quite so often. For all that futurologists love to talk about Moore’s law, the fact is that performance improvements have very nearly ground to a halt, compared to 15 years ago. Yes, Moore’s law is still in play, but that doesn’t mean much in practice: CPU transistor counts double every couple of years, but this brings relatively small performance improvements.
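To put numbers on that: doubling every couple of years compounds to roughly a 32× transistor budget per decade, even while clock speeds stay flat. A quick sketch of the arithmetic (the starting count is an assumed round number for illustration, not any real chip’s):

```python
def transistor_budget(start_count, years, doubling_period=2.0):
    """Moore's-law growth: the count doubles once every doubling_period years."""
    return start_count * 2 ** (years / doubling_period)

base = 100_000_000  # assumption: ~10^8 transistors on a mid-2000s CPU
print(transistor_budget(base, 10) / base)  # prints 32.0 -- 32x in a decade
```

Five doublings in ten years is a factor of 32, which is exactly the kind of growth that used to translate into speed, and no longer does.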

Moore’s law was never the main force behind the exponential performance improvements we used to enjoy. The two phenomena were merely correlated for years. In practice the improvements came almost entirely from exponential increases in transistor switching speeds. The correlation was down to a common root cause: shrinking transistors. Each new generation of silicon fabrication technology made transistors smaller. This enabled more to fit on a chip (fulfilling Moore’s law) but it also had the happy side effect of letting them work faster. But round about 2003/2004, these size reductions stopped producing speed increases, chiefly because power consumption and heat dissipation made ever-higher clock rates impractical. They still enabled Moore’s law to hold, but this has simply demonstrated that Moore’s law wasn’t the main thing delivering ever faster computers. (To be fair, Moore’s law came in handy back in the good times: it enabled us to build in the caches that helped to exploit the speed improvements. Without Moore’s law it would have been harder to make practical use of the increased processing power. So you need both phenomena to drive up performance; Moore’s law on its own doesn’t do it.)

Strictly speaking, Moore’s law is still providing more power, but not in a form that provides much of a speed boost in practice, for most applications. The power comes in multi-core CPUs, offering parallel processing capabilities of the kind that were once the preserve of exotic, specialized systems. But a quick glance at the 8 CPU load meters on my computer’s task manager when it’s busy shows me that the vast majority of tasks have no use for this parallelism. Only certain specialized tasks can do anything useful with this particular kind of power, something we’ve known since 1967. Amdahl’s law predicts this problem, and most computing tasks look more like the line at the bottom of the first graph on that page than the one at the top.
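Amdahl’s law itself is easy to state: if only a fraction p of a task can run in parallel, the best possible speedup on n cores is 1/((1 − p) + p/n). A minimal Python sketch (the values of p here are illustrative assumptions, not measurements from any real workload):

```python
def amdahl_speedup(p, n):
    """Maximum speedup on n cores when fraction p of the work parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

# A task that is only 30% parallelisable barely benefits from 8 cores,
# and caps below 1.43x no matter how many cores you add (1/0.7).
# A 95% parallelisable task -- rare outside specialized workloads -- scales far better.
for p in (0.30, 0.95):
    for n in (2, 8, 64):
        print(f"p={p:.2f}, n={n:2d}: speedup = {amdahl_speedup(p, n):.2f}x")
```

Those flat numbers for p = 0.30 are why extra cores do so little for typical desktop work.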

Apparently the press hasn’t noticed that the exponential performance improvements which are supposed to be ushering in a ‘singularity’ petered out about a decade ago. Even within the computing industry, an embarrassing number of people have failed to spot this fundamental fact.

All of this may not exactly sound like a positive—it means computer upgrades are no longer as exciting as they used to be, which in turn is a significant factor behind the collapse in demand that’s causing the whole problem in the first place. We’re learning to make do with less, and the smartphone and tablet revolution is part of that. But it’s also why it’s now entirely possible to make a desktop PC that will be useful for 5 years. (And of course, I realise that by taking advantage of this, I’m part of the problem.)

The extended useful life of a computer means component failure is now a major factor in the need to replace equipment. Two decades ago, computer components may as well have been immortal, because (except for hard drives) they usually became obsolete years before they stopped working. And this brings me onto the question behind this post: should I start stockpiling computer equipment?

The Last Affordable Motherboard

We have a vicious circle. Slowing performance improvements (and, soon, rising prices) will push down demand. This reduces opportunities for economies of scale, so prices will rise. Manufacturers will have less incentive to spend billions pursuing ever more elusive performance gains, so improvements will not only become more expensive, they will be increasingly slight. Add ‘good enough’ tablets and phones to the mix, and all you’re left with is increasingly infrequent computer purchases by software developers and other geeks like me. By that time, computers will be as expensive as any other highly specialized technical equipment.

You know that fully-loaded Mac Pro (the new one that looks like evil R2-D2 or a high-tech dustbin) that seems so absurdly expensive? Get used to it—five-digit price tags are the future (although the industrial design is probably all downhill from here).

This suggests that there’s going to come a point where it makes sense to buy enough motherboards and other basic components to last a lifetime before the prices get too high. If the only thing I had to worry about was age-related failures, I could reasonably afford to buy all the motherboards and assorted components I’ll ever need. (I’m assuming here that failure is connected to hours of use rather than total elapsed age, and that a component that has sat on the shelf for 20 years is as good as new. Most electronic failures tend to be physical in nature, caused either by heat cycles or, in the case of moving parts—fans and non-SSD drives—plain old wear and tear. And the relatively short lifetime of SSDs is also directly related to how much you use them, rather than how much time has passed. So excellent shelf life seems like a reasonable assumption, but I can’t prove it of course.)

The only problem with this is that components are still getting better—progress has slowed but it hasn’t stopped entirely. If I had bought 10 motherboards 5 years ago, I’d be annoyed, because things have moved on enough that I really don’t want a computer built around something of that era. But the useful lifetime does keep going up, so sooner or later I’ll buy a brand new motherboard that’s not very much worse than the very best one I’ll ever be able to buy.

I don’t think we’re there yet, and I’m not quite sure when that time will come. I’d like to believe the time to start stockpiling is a good 10 years away. But then in 2003 I naively thought that we probably had a good 10 years of exponential clock-speed-driven performance increase yet to come, when in fact it was already pretty much game over. So I suspect that 10 years is optimistic.

The hour is later than you think.