Back in the ’80s and ’90s, it was a seriously noteworthy advance whenever Intel, IBM, or TSMC announced that they had crossed yet another nanometer threshold and moved their CMOS chip fab process down the micron ladder. In 1985, 1 micron (1,000nm) was the state of the art, used by the Intel 80386 processor. By 2004, the micron scale had been abandoned and 90nm processors like AMD’s Winchester Athlon 64 and Intel’s Prescott Pentium 4 were the norm.

Things have slowed down considerably since the heady days of 0.8, 0.6, and 0.35 micron, though. Most current digital devices use processors, sensors, and memory chips based on 45 and 65nm processes because very few silicon foundries, Intel excepted, have managed to make the jump to 32nm, let alone 22nm. The fact is, the standard process of arranging components on a silicon wafer using a top-down, layer-by-layer approach has hit a wall. Even atomic layer deposition, the technique that will take us to 22nm and then 16 and 14nm, and introduce FinFET “3D” transistors, can go no further.
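To see why the ladder slows, it helps to look at the numbers. Each full process generation has historically been roughly a 0.7x linear shrink, which halves transistor area. A minimal sketch in Python; the node list and the 0.7x rule of thumb are approximations for illustration, not official roadmap data:

```python
# Approximate process-node ladder, in nanometers. A full node shrink is
# roughly 0.7x linear, which halves transistor area (0.7^2 ~= 0.49).
nodes = [1000, 800, 600, 350, 250, 180, 130, 90, 65, 45, 32, 22]

for prev, cur in zip(nodes, nodes[1:]):
    shrink = cur / prev
    print(f"{prev:>4}nm -> {cur:>4}nm  linear {shrink:.2f}x  area {shrink**2:.2f}x")
```

Running this shows every step clustering around a 0.7x linear shrink, which is why each generation roughly doubles transistor density until the atoms themselves get in the way.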

The thing is, atoms are very, very small, but they still have a finite size. A hydrogen atom, for example, is about 0.1 nanometers across, and a caesium atom is around 0.3nm. The silicon atoms used in chip fabrication are around 0.2nm. Now, you would be right in thinking that hundreds of atoms fit into 22 or 16nm, but that figure isn’t the size of an individual transistor; it is a measure of the spacing between discrete features on the chip. In the case of 22nm chips (a process that only Intel has mastered, and which will come to market with Ivy Bridge), the high-κ dielectric layer is only 0.5nm thick: two or three atoms!
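The back-of-the-envelope arithmetic here is easy to check. A tiny sketch, using the approximate 0.2nm silicon atom diameter quoted above (the helper name is purely illustrative):

```python
SI_ATOM_NM = 0.2  # approximate silicon atom diameter from the text, in nm

def atoms_across(feature_nm, atom_nm=SI_ATOM_NM):
    """Roughly how many atoms span a feature of the given width."""
    return feature_nm / atom_nm

print(atoms_across(22))   # ~110 atoms span a 22nm feature pitch
print(atoms_across(0.5))  # ~2.5 atoms: the 0.5nm high-k dielectric layer
```

A hundred-odd atoms across a 22nm pitch sounds comfortable; a two-to-three-atom dielectric leaves no margin at all, which is the crux of the reliability problem below.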

This is a problem because no manufacturing technique is perfect, and when a single out-of-place atom can ruin an entire chip, it is no longer possible to create circuits that are both reliable and cost-effective.

How will we scale the 14nm wall? The only real option is changing how chips are made. So much time, money, and research has already been plowed into our existing layer-by-layer lithography techniques that the next few, stopgap years will probably revolve around supplemental technologies like IBM’s “silicon glue” and Invensas’ chip-stacking process, both of which lower power consumption and improve performance. Instead of squeezing more transistors onto a wafer, the emphasis will shift to reducing power consumption by controlling subthreshold leakage and to building more components into a single SoC.

And who knows what else might be around the corner? Intel has 11nm on its roadmap, so presumably it has a plan to break through the 14nm wall. Perhaps graphene chips are the answer, or photonic or quantum computers? With the shift towards mobile computing, perhaps Moore’s law will make way for Koomey’s law instead? Either way, don’t worry about it too much: if the enduring persistence of the silicon chip has taught us anything, it’s that computers will get faster, cheaper, and more efficient.
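For the curious, Koomey’s law says that the number of computations per joule of energy has historically doubled on a fairly fixed schedule; the ~1.57-year doubling period used below comes from Koomey’s published research, and is an assumption baked into this sketch:

```python
# Koomey's law, sketched: computations per joule have historically
# doubled roughly every 1.57 years (an empirical figure, not a guarantee).
DOUBLING_YEARS = 1.57

def efficiency_gain(years, doubling=DOUBLING_YEARS):
    """Multiplicative gain in computations-per-joule after `years`."""
    return 2 ** (years / doubling)

print(f"{efficiency_gain(10):.0f}x more computations per joule after a decade")
```

By this measure, a decade buys roughly an 80x improvement in energy efficiency, which matters far more than raw clock speed for battery-powered mobile devices.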

Speaking to the BBC, Mike Mayberry, director of components research at Intel, offered some insight. “We need to do something different,” he said. “We cannot keep driving down that road without turning the wheel.” When questioned about Moore’s law, he said: “We look down the road and it’s foggy. The nearby stuff is clear and we can see that the big stuff is there but we cannot see the details.”

“The horizon is about 10 years away.”