Every six months, a team of supercomputing academics compiles a list of the most powerful computers on the planet. It's called the Top500 list, and it has become a competition of sorts. National labs vie against universities, military facilities, NASA, and even temporary cloud-based supercomputers, all to see who's building the world's largest number-crunching machines.

This year, the machine at the top of the list is Tianhe-2, a Chinese system that can perform 33.86 quadrillion calculations per second. But here's the thing: Tianhe-2 was on top back in November of 2013, and a year ago too. In fact, when you look at the top 10 machines on the June list, there's only one new entry: an unidentified Cray supercomputer, operated by the U.S. government. It's ranked tenth.

For Jack Dongarra, a computer science professor at the University of Tennessee who has long been involved with the list, this is the start of a trend. "Things seem to be slowing down," he says. "You might characterize it as maybe a sign that Moore's Law is having some issues." Moore's Law predicts that the number of transistors on a computer microprocessor will double every two years or so, providing regular leaps in computing power.
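As a rough back-of-the-envelope sketch (not from the article), the doubling rule compounds fast, which is why even a small stumble in Moore's Law is so noticeable. The starting chip size here is a hypothetical example:

```python
def projected_transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward under Moore's Law:
    the count doubles once every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Starting from a hypothetical 1-billion-transistor chip,
# ten years of doubling every two years yields a 32x increase.
print(projected_transistors(1e9, 10))  # 32 billion transistors
```

Stretch the doubling period from two years to three and the same decade delivers roughly 10x instead of 32x, which is the kind of slowdown the list may be hinting at.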

Dongarra sees early signs of stagnation not only at the top of the list, but also at the bottom. Traditionally, the system that places last on the list has been significantly faster than its predecessor. But over the past few years, that growth rate has slowed, as you can see in this chart:

[Chart: performance growth at the bottom of the list. Source: Top500 List]

"We see a widening of the gap between #1 and #500," Dongarra says. In other words, there are still some wondrous giant computers being made, but they're not popping up as frequently, and the smaller systems at the bottom aren't catching up as fast, either.

For more than a decade, Moore's Law has treated the supercomputer community pretty well, handily delivering astounding performance improvements year after year, but eking out those improvements has been a little harder of late. Transistors are shrinking down to the atomic level, and chip designers have had to rejigger the way they boost performance, delivering multi-core chips and looking at novel architectures.

A few years ago, hybrid systems that used graphical processing units as calculation accelerators were all the rage, but the "uptake has not been as great as the hype has been," Dongarra says. Without a big performance win to be had right now, researchers may be holding on to their existing systems just a little bit longer before they build the next great thing.

Over at Intel, the chipmaking giant that supplies the processors for the vast majority of supercomputers on the list, there's no sign of a slowdown in the roadmap for the next few generations of microprocessors. "We expect Moore's Law to continue to provide benefits for the foreseeable future across a broad range of computing segments," said Bill Calder, a company spokesman, in an email. And indeed there may be other forces at work here. In the U.S., funding for these behemoths has been scaled back, and that may also be contributing to the stagnation on the list.

But if the Top500 List really is saying something about Moore's Law, then the rest of the world should take notice. We've grown accustomed to chip-makers cranking out faster and cheaper computing power, year after year. If that engine is sputtering, we'll all soon be feeling the drag.