At CES in January 2019, Nvidia’s chief executive, Jensen Huang, said what most of us in the tech business had already considered and accepted: Moore’s Law, which predicts regular increases in the computing power of silicon chips, is dead.

Today, the smallest commercially produced chips have feature sizes that are a minuscule 7 nm. As transistors get closer to atomic scale, it’s getting harder to shrink them further. Many believe that today’s most advanced transistor design, the FinFET, can’t get below 5 nm without a major rethink—and that even 5 nm may be prohibitively expensive. That means, in turn, that it’s harder to double the density of transistors on a silicon chip every 24 months, as Moore’s Law predicts.

The death of Moore’s Law has major implications, as a slowdown in performance improvements could hit some computing applications hard. Horst Simon, Deputy Director of the Lawrence Berkeley National Laboratory, helps rank the 500 most powerful supercomputers in the world twice a year. He notes that while year-over-year increases remain significant—performance growth hovers at around 1.6x per year—there has been a marked reduction from the 1990s and early 2000s, when annual improvements regularly exceeded 2x.
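
The gap between those two rates compounds dramatically. A back-of-the-envelope sketch (the 10-year horizon here is my own illustration, not a figure from Simon) shows what the difference between roughly 2x and 1.6x annual growth means over a decade:

```python
# Compare cumulative supercomputer performance growth over a decade
# at the historical ~2x/year rate versus the ~1.6x/year rate Simon cites.

def cumulative_growth(annual_factor: float, years: int) -> float:
    """Total performance multiple after `years` of compounded annual growth."""
    return annual_factor ** years

fast = cumulative_growth(2.0, 10)  # historical rate: ~1024x over 10 years
slow = cumulative_growth(1.6, 10)  # current rate: ~110x over 10 years

print(f"2.0x/year over a decade: {fast:.0f}x total")
print(f"1.6x/year over a decade: {slow:.0f}x total")
```

In other words, a machine compounding at the older rate would end the decade roughly nine times faster than one compounding at today’s rate—the kind of shortfall that simulation-heavy fields feel directly.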

“At the high end, we’ve seen a slowdown,” Simon said. That slowdown could impact disciplines from astrophysics to climatology, which rely on supercomputer simulations for research.

Despite the epoch-changing feeling that the end of Moore’s Law may evoke, there are many ways to improve performance without squeezing more transistors onto each chip (or waiting decades for the development of exotic solutions like spintronics). It just takes a little creativity.

One promising tactic is further specialization. Over the past 20 years, we’ve seen the widespread adoption of GPUs to handle calculations for 3D computer graphics and other highly parallel tasks, since they can handle some of these workloads far faster than general-purpose CPUs. Today, companies are starting to broaden the application of specialized chips—also called accelerators—to other areas, including machine learning, security, and cryptocurrency.

Valeria Bertacco, Director of the Applications Driving Architectures research center at the University of Michigan, says we “are just seeing the beginning” of this trend.

Today, we have five or six types of specialized processors, but in five to 10 years that may turn into 100, she says. This finer grain of specialization brings the potential to develop processors that can handle applications that are difficult for today’s hardware, such as the complicated graph-based computations needed to analyze social networks.

“Scaling silicon is not the only way to get lower power and better performance,” she said. “It was just the easiest way until 10 years ago.” A decade from now, people shopping for a high-end computer may pay less attention to its raw clock speed in GHz and more to how many specialized processors it includes.
