Introduction

In April of 1965, Electronics magazine published an article by Gordon Moore, then the director of research and development at Fairchild Semiconductor and later a co-founder of Intel. The article and the predictions it made have since become the stuff of legend, and like most legends, the story has changed in the telling and retelling. The press seized on the article's argument that semiconductor technology would usher in a new era of electronic integration, and distilled it into a maxim that has taken on multiple forms over the years. Whatever form the maxim takes, though, it always goes by the same name: Moore's Law.

Moore's Law is so perennially protean because Moore himself never gave it a single precise formulation. Rather, using prose, graphs, and a cartoon, Moore wove together a collection of observations and insights to outline a cluster of trends that would change the way we live and work. In the main, Moore was right, and many of his specific predictions have come true over the years. The press, on the other hand, has met with mixed results in its attempts to sort out exactly what Moore said and, more importantly, what he meant. The present article represents my humble attempt to bring some order to the chaos of almost four decades of reporting and misreporting on an unbelievably complex industrial, social, and psychological phenomenon.

Because this article is quite lengthy, I've divided it into three parts. I've also provided links and summaries for each part below so that you can skip to the part that interests you most:

Part I: The origins of Moore's Law

What was Moore's original formulation? It wasn't about increasing "computing power," and there was a bit more to it than just shrinking feature sizes. Exploring what Moore originally said will give us the opportunity to learn about the major factors that shape semiconductor manufacturing, and that ultimately shape what we can do with computers and much of modern life. Finally, I'll look at how Moore's observation morphed into the present media construction of "Moore's Law" as a statement about performance.

Part II: The effects of Moore's Law

In this section, I'll look at the kinds of possibilities for computing advancement that Moore's Law opens up. Power consumption, flexibility, and a host of other issues come into play when we start looking at the variety of ways to exploit the ever-increasing levels of integration that Moore's Law affords us. In the end, we'll see why Moore's Law is just as responsible for "smaller, cheaper, and more efficient" as it is for "bigger, faster, and more power-hungry."

Part III: The future of Moore's Law

In the third and final part, we'll look at some of the challenges currently facing designers who would make use of increasing transistor densities to keep Moore's cost/integration curves marching downwards. In some markets, system architects are arguing that more integration isn't always better, and in other markets they're finding it increasingly difficult to mix all the different types of circuits that they'd like to include on a single die.

Part I: The origins of Moore's Law

The way that "Moore's Law" is usually cited by those in the know is something along the lines of: "the number of transistors that can fit on a square inch of silicon doubles every 12 months." The part of Moore's original 1965 paper that's usually cited in support of this formulation is the following graph:

[Graph: Moore's 1965 plot of the number of components per integrated circuit (log scale) against year]
This graph does indeed show transistor densities doubling every 12 months, so the formulation above is accurate. However, it doesn't quite do justice to the full scope of the picture that Moore painted in his brief, uncannily prescient paper, because the paper dealt with more than just shrinking transistor sizes. Moore was ultimately interested in shrinking transistor costs, and in the effects that cheap, ubiquitous computing power would have on the way we live and work. This section of the present article aims to give you a general understanding of the various trends and factors that Moore wove together to predict the rise of the personal computer, the mobile phone, the digital wristwatch, and other innovations that we now take for granted. I should note that Moore's original paper was only four pages long; the present article is much longer because Moore presumed far more background knowledge about the semiconductor industry than most non-specialists have, and filling in that background takes space.
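To get a feel for just how quickly a fixed doubling period compounds, here's a quick back-of-the-envelope sketch. The starting point of 64 components in 1965 is my own illustrative choice, not a figure taken from Moore's paper:

```python
# Compound a "doubles every 12 months" trend forward from an illustrative
# starting point of 64 components in 1965.
base_year, base_count = 1965, 64

for year in (1965, 1970, 1975):
    count = base_count * 2 ** (year - base_year)
    print(year, count)
# → 1965 64
# → 1970 2048
# → 1975 65536
```

Five years of annual doubling is a 32-fold increase, and ten years is a 1,024-fold increase, which is why even small changes to the assumed doubling period lead to wildly different long-range projections.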

If you read through Moore's paper, the closest you'll come to a quote that resembles "Moore's Law" is the italicized portion of the following section, subtitled "Costs and curves."

Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate. For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package containing more components. But as components are added, decreased yields more than compensate for the increased complexity, tending to raise the cost per component. Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below). If we look ahead five years, a plot of costs suggests that the minimum cost per component might be expected in circuits with about 1,000 components per circuit (providing such circuit functions can be produced in moderate quantities.) In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost. The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page) [emphasis mine]. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.

What exactly does Moore mean by "the complexity for minimum component costs"? And what is the relationship between manufacturing defects, costs and the level of integration? The answers to these two questions are a bit complicated, but I'll do my best to break them down in a reasonably understandable manner.

One good place to begin an explanation of the italicized phrase is by rewriting it in a way that unpacks it a bit:

"The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year."

This way of putting it is a little better, but the sentence is still impossible to parse correctly if you don't understand the multiple factors that influence the relationship between the number of transistors that you can put on a chip and the cost per individual chip. The following section is aimed at giving you an appreciation of those factors, so that you can better understand Moore's original insight.
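Before diving into those factors, it may help to see the shape of Moore's cost curve in miniature. The following toy model is entirely my own illustration, with made-up numbers: it assumes a fixed cost per die, a fixed area per component, and a simple exponential (Poisson-style) yield model, just to show why a "complexity for minimum component cost" exists at all:

```python
import math

# Toy model (my own illustration, not Moore's actual data) of why cost per
# component first falls and then rises as integration increases.
# Assumptions, all hypothetical:
#   - each component occupies a fixed amount of silicon area
#   - fabricating one die costs a fixed amount regardless of component count
#   - defects strike at random, so the fraction of working dies (the yield)
#     falls exponentially with die area

COST_PER_DIE = 10.0        # fixed cost to fabricate and package one die
AREA_PER_COMPONENT = 1.0   # arbitrary area units per component
DEFECTS_PER_AREA = 0.01    # average defects per unit area

def cost_per_component(n_components: int) -> float:
    """Expected cost per working component on a die with n_components."""
    die_area = n_components * AREA_PER_COMPONENT
    yield_fraction = math.exp(-DEFECTS_PER_AREA * die_area)
    # The fixed die cost is spread over the components, but a lower yield
    # means more dies must be fabricated per working die, raising the
    # effective cost of every good component.
    return COST_PER_DIE / (yield_fraction * n_components)

# Sweep component counts to find the minimum-cost complexity.
counts = range(10, 501, 10)
best = min(counts, key=cost_per_component)
print(best)  # → 100 with these toy numbers
```

At low component counts the fixed die cost dominates, so adding components makes each one cheaper; at high counts the collapsing yield dominates, so each working component gets more expensive. The minimum sits in between, and as manufacturing improves (lower defect density, cheaper dies), that minimum shifts toward higher component counts, which is exactly the movement Moore was tracking.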

This article was first published on February 20, 2003.