I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.

In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think that by failing to remember that past we risk misunderstanding the present and mispredicting the future, so I’m going to do what I can to set the record straight.

Some blurriness about how things were back then is understandable; it can sometimes take a bit of effort even for those of us who were there in elder days to remember what it was like before PCs, before the Internet, before pixel-addressable color displays, before ubiquitous version-control systems. And there were so few of us back then – when I first found the Jargon File around 1978 you could fit every hacker in the U.S. in a medium-sized auditorium, and if you were willing to pack the aisles probably every hacker in the world.

A larger and subtler change, the one easiest to forget, is how dependent we were on proprietary technology and closed-source software in those days. Today’s hacker culture is very strongly identified with open-source development by both insiders and outsiders (and, of course, I bear some of the responsibility for that). But it wasn’t always like that. Before the rise of Linux and the *BSD systems around 1990 we were tied to a lot of software we usually didn’t have the source code for.

Part of the reason many of us tend to forget this is mythmaking by the Free Software Foundation. They would have it that there was a lost Eden of free software sharing that was crushed by commercialization in the late 1970s and early 1980s. This narrative projects Richard Stallman’s history at the MIT AI Lab onto the rest of the world. But, almost everywhere else, it wasn’t like that.

One of the few other places it was almost like that was early Unix development from 1976-1984. They really did have something recognizably like today’s open-source culture, though much smaller in scale and with communications links that were very slow and expensive by today’s standards. I was there during the end of that beginning, the last few years before AT&T’s failed attempt to lock down and commercialize Unix in 1984.

But the truth is, before the early to mid-1980s, the technological and cultural base to support anything like what we now call “open source” largely didn’t exist at all outside of those two settings. The reason is brutally simple: software wasn’t portable!

You couldn’t do what you can do today, which is write a program in C or Perl or Ruby or Python with the confident expectation that it will run on multiple architectures. My first full-time job writing code, in 1980, was representative of the time: writing communications software on a TRS-80 in Z-80 assembler. Assembler, people! We wrote a lot of it. Until the early 1980s, programming in high-level languages was the exception rather than the rule. In general, you couldn’t port that stuff!

Not only was portability across architectures a near-impossible dream, you often couldn’t port between instances of the same machine without serious effort. Especially on larger machines, code tended to be intertwined with details of individual site configuration to an extent that would shock people today (IBM JCL was notoriously the worst offender, but by no means the only one).

In that kind of environment, arguing about whether code should be redistributable in general was next to pointless, because unless the new machine was specifically designed to be binary-compatible with the old, ports amounted to re-implementations anyway.

This is why the earliest social experiments in what we would now call “open source” – at SHARE and DECUS – were restricted to individual vendors’ product lines and (often) to individual machine types. And it’s why the cancellation of the PDP-10 follow-on in 1983 was such a disaster for the MIT AI Lab and SAIL and other early hacker groups. There they were, stuck, having folded huge amounts of time and genius into a vast pile of PDP-10 assembler code with no real possibility that it would ever be useful again. And this was normal.

The Unix guys showed us the way out, by (a) inventing the first non-assembler language really suitable for systems programming, and (b) proving it by writing an operating system in it. But they did something even more fundamental – they created the modern idea of software systems that are cleanly layered and built from replaceable parts, and of re-targetable development tools.

Tellingly, Richard Stallman had to co-opt Unix technology in order to realize his vision for the Free Software Foundation. The MIT AI Lab itself never found its way to that new world. There’s a reason the Emacs text editor is the only software artifact of that culture that survives to us, and it had to be rewritten from the ground up on the way. (Correction: A symbolic-math package called MACSYMA also survives, though in relative obscurity.)

Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.

But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.

We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.