Rob Landley recently wrote a widely linked piece on the BSD/System V split (via). There are any number of parts of his piece that I disagree with, but today I want to talk specifically about the early history of gcc, or more precisely why people were enthused about it in the late 1980s. The origin of gcc is pretty straightforward; Richard Stallman knew he needed a free C compiler to enable his goal of a completely free Unix, so he wrote one fairly early in the FSF's life. But that's not the reason most people became enthused about it after it got released; after all, at that point pretty much everyone already had a C compiler.

What they didn't have was a good compiler. As hard as it may be to believe now (in a world where everyone knows that a good part of your benchmark performance comes from the compiler), back in the late 1980s the CISC-based Unix vendors didn't really see their C compiler as a competitive advantage. By and large most people simply ported or used the basic Portable C Compiler that had originally come from Research Unix, maybe added some optimizations, and then called it a day. If it could compile Unix correctly and didn't run terribly badly, it was pretty much good enough for DEC (for Ultrix), Sun, SGI, HP, and so on. Devoting more than a bit of engineering resources to the C compiler was apparently too much of a hard sell. And of course if you were running real BSD Unix on your Vax you didn't even have the benefit of any compiler improvements DEC had made.

(This is less crazy than it may seem today. At the time probably the major locus of performance oriented computing was in Fortran code; the vendors did tend to have good Fortran compilers, which they generally wanted extra money for. And remember that C was a relatively new language in the 1980s.)

Then along came gcc. Gcc was rather more modern than common Unix C compilers and even in its early state it often generated clearly better (ie faster) code on Vaxes and M68K machines. In the academic environments I hung around in, it soon became common knowledge (or at least folklore) that recompiling programs like the X server with gcc would get you visible performance improvements. Since the machines back then were not exactly fast, 'CC=gcc' started appearing in more and more Makefiles, configuration systems started preferring it when it was found, and so on. That you could basically get a performance improvement for free just by installing gcc made all sorts of people quite interested and enthused about it.
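Part of what made this trick so cheap to apply is how make handles variable overrides; a minimal sketch of a Makefile in this style (the target and file names here are hypothetical, just for illustration):

```make
# CC defaults to the vendor compiler; overriding it swaps in gcc
# without touching anything else in the build.
CC = cc
CFLAGS = -O

xclient: xclient.o
	$(CC) $(CFLAGS) -o xclient xclient.o
```

Because variable assignments on the make command line take precedence over assignments in the Makefile itself, running 'make CC=gcc' switched compilers without editing any files, which is exactly the sort of low-effort change that spreads quickly.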

(The other attractive thing about gcc for garden variety people was that it often supported a more modern dialect of C than the normal Unix compiler. On top of that, a newer and thus better version of gcc was often just an ftp fetch and rebuild away; you didn't have to wait for an entire OS update.)

This gcc advantage was only an advantage on the common CISC architectures. RISC-based Unix systems had much better compilers from the get-go, and if I remember right gcc version 1 actually had architectural limitations that meant it couldn't really do the advanced optimizations that vendor compilers did. Thus the irony of people switching to gcc when Sun yanked the free compilers from later versions of their OS was that they were going backwards in performance. And yes, we knew this full well. But free beat non-free for many people, especially at universities with limited budgets.

(People who used MIPS based systems from MIPS and then SGI were in luck, because I believe that SGI basically always bundled the generally excellent MIPSco compilers (perhaps only for universities). We didn't use gcc much on either our MIPS Ultrix machines or on the SGI machines that succeeded them.)

Update: There's more good stuff in this Hacker News comment and comment thread.