By: Michael Feldman

The long-predicted demise of Moore’s Law appears to be playing out. Over the last couple of years, Intel and other chipmakers have struggled to keep their semiconductor technology plans on schedule, paving the way for fundamental changes in the computer industry.

To be clear, this is not the end of transistor shrinkage. That should proceed at a reduced pace for the next several years. But the traditional 18-month to two-year cycle of doubling transistor density is over. Intel appears to be planning to deploy its fourth processor generation on essentially the same 14nm process technology it introduced in 2014.

A landmark article on the death of Moore’s Law, published last year in MIT Technology Review, noted that Intel’s 10nm process technology deployment had slipped from 2016 to late 2017. Although that is still the official plan, rumors of defect and yield problems may delay the rollout even further. And Intel’s 7nm process node won’t be available until 2021 or 2022. So much for two-year cycles.

Meanwhile, other chipmakers appear to be catching up. Qualcomm, for example, is planning to use a 10nm process for its upcoming Snapdragon 835 SoC, while GlobalFoundries is looking to skip the 10nm node and go directly to 7nm in 2018. TSMC has its 12nm process technology in the pipeline and will fabricate NVIDIA’s upcoming Volta GPU on this node. TSMC has also been tapped to support volume production of Qualcomm’s Centriq 2400 ARM server chip on its 10nm process.

Intel is still considered ahead of the competition at this point, since its rivals measure their nanometers quite differently. In general, 10nm technology from TSMC, Qualcomm, and GlobalFoundries corresponds to Intel’s 14nm node. Nevertheless, the pure-play fab companies are moving aggressively, and it’s possible that Intel will lose its process manufacturing advantage in the not-too-distant future, maybe as early as 2018.

The bigger picture is that no manufacturer is likely to put transistor shrinkage back on a two-year cadence. The difficulty of maintaining reasonable yields is exacerbated at each new node. And the fabs that manufacture those transistors at smaller and smaller geometries become increasingly expensive. Intel’s recent announcement of its future $7 billion fab for 7nm chips gives you a sense of the stakes involved. Thus, the economics of creating leading-edge chips demands either greater volumes or higher margins, and at some point those numbers might not add up.

The industry will adapt. In fact, it already has, and this has been especially true in the server space. The move to integrated system-on-chip (SoC) designs is already standard for server processors. In essence, it shrinks the motherboard components onto the die, enabling greater efficiencies in data movement and power dissipation. Integration gives up some flexibility, but it’s one of the principal ways of doing more computation with fewer transistors.

Likewise, there’s a concerted move to throughput processors in the datacenter. GPUs, FPGAs, and Xeon Phi processors offer a degree of specialization for those applications that can take advantage of high levels of data parallelism, namely HPC and machine learning. FPGAs also are able to support certain datacenter workloads -- networking, storage, cryptography, security -- a good deal more efficiently than standard CPUs. These more specialized platforms can offer an order of magnitude better performance than a scalar processor on the appropriate workloads.
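To make “data parallelism” concrete, consider SAXPY, the textbook kernel of this class: every output element depends only on the corresponding input elements, so a GPU or other throughput processor can compute thousands of them simultaneously. A minimal scalar sketch in Python (the function is illustrative, not any vendor’s API):

```python
# SAXPY (y := a*x + y), the classic data-parallel kernel.
# Each output element depends only on x[i] and y[i], with no
# cross-element dependencies -- which is exactly what lets a GPU
# spread the loop across thousands of hardware threads.
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# -> [12.0, 24.0, 36.0]
```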

Further afield are quantum computing processors and neuromorphic chips. Neither is particularly dependent on Moore’s Law, since they represent a radical departure from conventional computing. In the case of quantum computing, the challenge is to string together enough qubits to perform useful computations beyond the scope of digital computers. The best guess is that a processor with 30 to 60 qubits will accomplish that. Google, IBM, Microsoft and Intel all have research programs underway, and IBM recently declared it would have commercial 50-qubit systems in “a few years.”
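The rough 30-to-60-qubit threshold follows from simple arithmetic: simulating n qubits classically means storing 2^n complex amplitudes, so the memory requirement doubles with every qubit added. A back-of-the-envelope sketch (assuming 16 bytes per amplitude, i.e. two 64-bit floats):

```python
# Memory needed to hold the full state vector of an n-qubit system:
# 2**n complex amplitudes at ~16 bytes each.
def state_vector_gib(n_qubits: int) -> float:
    return (2 ** n_qubits) * 16 / 2 ** 30

for n in (30, 40, 50):
    print(f"{n} qubits: {state_vector_gib(n):,.0f} GiB")
# 30 qubits fits on a large workstation (16 GiB); 50 qubits needs
# roughly 16 million GiB, beyond any classical machine's memory.
```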

Neuromorphic computing has also been much in vogue recently, with efforts by Intel, Microsoft, Qualcomm, and IBM, as well as an array of academic research efforts, making steady progress. The idea here is to ditch the von Neumann style of computing altogether in favor of a brain-like architecture that implements neurons and synapses in hardware. The promise is that these processors can operate at a fraction of the power of a conventional chip while simulating some of the computational properties of a brain. Their suitability for AI applications is fairly obvious.
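To give a flavor of what “neurons in hardware” means, here is a minimal software sketch of a leaky integrate-and-fire neuron, the kind of unit neuromorphic chips implement directly in silicon. The threshold and leak values are illustrative, not taken from any particular chip:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential
    accumulates input current, decays ("leaks") each time step,
    and emits a spike (1) when it crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current      # integrate with leak
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.5]))
# -> [0, 0, 1, 0]
```

Because such a neuron only does work when spikes arrive, large arrays of them can sit mostly idle, which is where the power advantage over a clocked von Neumann processor comes from.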

The industry’s reliance on Moore’s Law for the last 50 years is coming to a close. Fortunately, the technologies discussed here will keep computers advancing for the foreseeable future. Maybe not all of them will pan out, but some certainly will. The transition to these technologies is likely to be a little rough, both for vendors and users, but the demand for computing will continue unabated. It’s an exciting time to be in the industry.

Image credits: @digitalblasphemy