When Gordon Moore, then at Fairchild Semiconductor, was asked in 1965 to theorize about the future of the newly developed integrated circuit, he had one in his lab with a then-amazing 64 transistors on it, double the 32 that had been state of the art only a year earlier. Connecting those points on a graph back to the single-component planar transistor invented in 1959, Moore noticed that the number of components on a chip was roughly doubling every year. In an article he contributed to a special issue of Electronics magazine published that spring, he speculated that it could continue to do so for at least a decade. It wasn’t until that decade had passed, and Moore’s friend Carver Mead observed that the trend had held up, that the term Moore’s Law was coined.

As we look forward to the future of Moore’s Law after its amazing 50-year run (50 years officially as of April 19th), it is helpful to look back at how it came to be, and how much it has already evolved to fit a changing industry. That provides a basis for speculating on what will happen to the pace of computing innovation going forward.

1965: Gordon Moore’s very-educated guess

Moore’s prediction was the result of combining two very important observations he made in the process of writing his original article. First, that at any given time there was an optimal number of components to put on a chip. Adding components lowered the cost per component, but it also lowered yield, so past a certain complexity each additional component made the chip more expensive per component, not less. He graphed the tradeoff between complexity and yield in the chart below, with an extrapolation out to 1970.
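The economics behind that optimum can be sketched with a toy model. The numbers here are purely illustrative and not Moore’s actual data; a fixed per-chip fabrication cost plus a yield that falls geometrically with component count is enough to reproduce the U-shaped cost curve he described:

```python
chip_cost = 100.0  # hypothetical fixed cost to fabricate one chip


def cost_per_component(n, defect_rate=0.01):
    """Cost per working component for a chip with n components.

    Yield falls geometrically as complexity grows, so cost per
    component first falls, then rises again.
    """
    yield_fraction = (1 - defect_rate) ** n
    return chip_cost / (n * yield_fraction)


# Find the component count that minimizes cost per component.
optimal_n = min(range(1, 500), key=cost_per_component)
print(optimal_n)
```

With these made-up parameters the minimum lands near 100 components; lowering the defect rate pushes the optimum higher, which is exactly the dynamic Moore saw driving his curve forward each year.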

Second, he realized that the optimal number of components on a chip was increasing rapidly: it had doubled every year since the first planar transistor had been created in 1959. That produced an exponential curve, which he graphed in the chart below, extending the line of historical data to predict that the doubling could continue for at least another ten years. While Moore had been inspired to think about the rapid miniaturization of components by hearing Douglas Engelbart speak on the subject, Moore was the first to plot the points on paper and make a specific prediction about how the trend would progress. Moore never thought of his prediction as a law, or even as something grounded in underlying physical principles. But he did explain in the article, in some detail, how he thought each technical problem that needed to be solved over the next decade could be successfully addressed.

1975: Carver Mead immortalizes an already-modified Moore’s Law

By the time Carver Mead coined the term Moore’s Law around 1975, Moore himself had already modified it. Even though Moore never expected his projection to be very precise, it had predicted semiconductor progress almost perfectly for a decade. However, Moore felt that gains in component density were beginning to taper off, and suggested that by 1980 a doubling every two years was the more likely prospect.

Intel’s House reshapes Moore’s Law into its current form

While component density gains were slowing by 1975, Intel’s Dave House observed that individual components were themselves getting faster. He theorized that this meant computing power on a chip could double about every 18 months — slower than Moore’s original 1965 prediction, but faster than the 1975 revision. This is the form of the Law that has become popular and has been carefully — almost slavishly — tracked and relied on by the semiconductor industry.
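As a quick sanity check on how those three doubling intervals compare, here is the annual growth factor each one implies (illustrative arithmetic, not figures from the original articles):

```python
# Doubling intervals, in months, for each version of the Law:
# Moore's 1965 yearly doubling, House's ~18 months, and Moore's
# 1975 revision to two years.
doubling_months = {"Moore 1965": 12, "House": 18, "Moore 1975": 24}

# Growth per year is 2 raised to (12 months / doubling interval).
annual_factor = {
    label: 2 ** (12 / months) for label, months in doubling_months.items()
}

for label, factor in annual_factor.items():
    print(f"{label}: x{factor:.2f} per year")
```

House’s 18-month version works out to roughly 59% more computing power per year, sitting between Moore’s original 100% and his revised 41%.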

If you work in the semiconductor industry, the specifics of Moore’s Law are very important to you. There are good reasons to question whether progress in integrated circuit technology can continue at its historic pace. My colleague, Joel Hruska, will have plenty to say on that in another article. For many of us, though, the primary impact of Moore’s Law has been an increasing abundance of computing power at a decreasing cost; we don’t really care how the industry makes it happen. So it is worth considering those innovations in the larger context of computing before, and perhaps after, the integrated circuit.

From the abacus to the supercomputer

Despite the focus given to the computing revolution brought on by the invention of the transistor and the integrated circuit, computers existed long before anyone thought of using silicon to create them. A stroll through the Computer History Museum’s chronologically arranged exhibits starts with the abacus, which in turn gives way to the slide rule, mechanical calculators dating back to Babbage, and then decades of increasingly powerful mainframes that relied on vacuum tubes. Before integrated circuits, discrete transistors made possible even early supercomputers like the Atlas and the 3 MFLOPS CDC 6600.

If we look at the progress of computing in the 30 years before Moore wrote his article, we can chart the gains in processing power from the 1 cycle per second of Konrad Zuse’s 1938 Z1 mechanical computer (arguably the first true programmable computer with a modern architecture) to the 3 MFLOPS of the 1965 CDC 6600. Even if we charitably grant the Z1 a full 1 FLOPS, the gain corresponds to a doubling of compute power every 12 to 18 months over that period, similar to the rate Moore projected for integrated circuits, but spanning several different physical implementations. In his book The Singularity Is Near, Ray Kurzweil goes even further back, compiling data since 1900 and the mechanical tabulator. If we graph that data on a log scale, we can see that we’ve been making exponential progress for over a century:
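The 12-to-18-month figure above can be verified with some back-of-the-envelope Python, using the article’s rough endpoints:

```python
import math

# Rough endpoints from the text: ~1 FLOPS for Zuse's Z1 (1938)
# and ~3 MFLOPS for the CDC 6600 (1965).
start_flops = 1.0
end_flops = 3e6
years = 1965 - 1938

# How many doublings fit between the endpoints, and how often
# a doubling must have occurred on average.
doublings = math.log2(end_flops / start_flops)
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
# -> 21.5 doublings, one every 15.1 months
```

A doubling roughly every 15 months, squarely inside the 12-to-18-month range, and spanning mechanical, relay, vacuum-tube, and transistor machines.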

After the integrated circuit

The modern integrated circuit is running into all sorts of limits in size and power that may spell the end of the strictly defined version of Moore’s Law. But we have plenty of new technologies waiting in the wings to pick up the pieces, in the same way the integrated circuit took over from discrete transistors, and transistors did from vacuum tubes. Perhaps the most obvious is massively parallel computing, best typified today by the modern GPU. It has given us massive increases in performance not just for graphics, but for more and more applications being rewritten to take advantage of a large number of processing cores. Beyond that lies the weird world of quantum computing, which is slowly starting to take practical shape, and new kinds of physical computer architectures, such as those based on light or graphene.

Many children are familiar with the fable of the mathematician who asked the king for a single grain of rice on the first square of a chessboard, doubled on each square thereafter, and how that seemingly modest request ran the king out of rice. In the same way, we are in the fortunate position in computing that despite successive technologies running out of steam, innovators always seem to come up with “the next big thing” in the nick of time to keep our amazing progress going. When interviewed on the subject, Moore himself reflected both that semiconductor technology couldn’t keep up its rapid progress indefinitely, and that other technologies like nanotechnology and graphene might step up to fill the need.
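For the curious, the fable’s arithmetic is easy to check; the final tally on a 64-square board is 2^64 − 1 grains, roughly 18 quintillion:

```python
# One grain on the first square, doubling on each of the 64 squares:
# 1 + 2 + 4 + ... + 2**63, which sums to 2**64 - 1.
total_grains = sum(2 ** square for square in range(64))

print(total_grains)  # -> 18446744073709551615
```

The same compounding is why a mere 21 or so doublings separate a 1 FLOPS mechanical computer from a multi-megaflop supercomputer.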

[Moore’s charts as reprinted in Understanding Moore’s Law. Transistor count chart from Wikimedia. Kurzweil data from The Singularity Is Near, page 70]