I am hardly the world's best programmer. I'll be the first to tell you that there are tons of developers out there better than I am. But here's the thing: in the ten years I've been gainfully employed as a so-called professional programmer, I can count the number of truly great programmers I've worked with on one hand. I know this probably sounds like hopeless elitism, but hear me out: there's something unique about our profession that leads to an unusually profound disparity in skill.

In programming specifically, many studies have shown order of magnitude differences in the quality of the programs written, the sizes of the programs written, and the productivity of the programmers. The original study that showed huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years' experience and found that the ratio of initial coding time between the best and worst programmers was about 20:1; the ratio of debugging times over 25:1; of program sizes 5:1; and of program execution speed about 10:1. They found no relationship between a programmer's amount of experience and code quality or productivity. (Code Complete, page 548)

In other words, the good developers are good, and the bad developers are atrociously bad. You really never know what you're going to get when you arrive on a job: statistically, you've got a fifty-fifty chance of working with either a genius or a jackass. Isn't that reassuring?

Wouldn't you expect a truck driver with twenty years of driving experience to perform better than a rookie with less than a year of road time under his belt? Of course you would. And shouldn't a grizzled ten-year veteran of dozens of software projects -- like, say, myself -- perform better than some punk kid fresh out of college? Well, you might think so, but in the bizarro world of software development, that logic doesn't apply:

[In the analysis of Coding War Games results, 1977 - 1986, we found that] people who had ten years of experience did not outperform those with two years of experience. There was no correlation between experience and performance except that those with less than six months' experience with the languages used in the exercise did not do as well as the rest of the sample. (Peopleware, p. 47)

In a study with similar findings, Bill Curtis presented a group of 60 professional programmers with what he characterized as a "simple" debugging task ("Substantiating Programmer Variability," Proceedings of the IEEE, vol. 69, no. 7, 1981). In spite of its simplicity, 6 of the professional programmers weren't able to complete the task, and data on their performance was excluded from the results of the study. Curtis observed order of magnitude differences among the programmers who were able to complete the task. (Steve McConnell)

What the hell kind of profession generates so much data supporting the hypothesis that there is no correlation between experience and performance? Where do we go from here? I don't have any answers, but I do have two suggestions.