Some people are obsessive about never using closed-source software under any circumstances. Some other people think that because I’m the person who wrote the foundational theory of open source I ought to be one of those obsessives myself, and become puzzled and hostile when I demur that I’m not a fanatic. Sometimes such people will continue by trying to trap me in nutty false dichotomies (like this guy) and become confused when I refuse to play.

A common failure mode in human reasoning is to become too attached to theory, to the point where we begin ignoring the reality it was intended to describe. The way this manifests in ethical and moral reasoning is that we tend to forget why we make rules – to avoid harmful consequences. Instead, we tend to become fixated on the rules and the language of the rules, and end up fulfilling Santayana’s definition of a fanatic: one who redoubles his efforts after he has forgotten his aim.

When asking the question “When is it wrong (or right) to use closed-source software?”, we should treat it the same way we treat every other ethical question. First, by being very clear about what harmful consequences we wish to avoid; second, by reasoning from the avoidance of harm to a rule that is minimal and restricts people’s choices as little as possible.

In the remainder of this essay I will develop a theory of the harm from closed source, then consider what ethical rules that theory implies.

Ethical rules about a problem area don’t arise in a vacuum. When trying to understand and improve them it is useful to start by examining widely shared intuitions about the problem. Let’s begin by examining common intuitions about this one.

No matter how doctrinaire or relaxed about this topic they are, most people agree that closed-source firmware for a microwave oven or an elevator is less troubling than a closed-source desktop operating system. Closed-source games are less troubling than closed-source word processors. Any closed-source software used for communications among people raises particular worries that the authors might exploit their privileged relationship to it to snoop or censor.

There are actually some fairly obvious generative patterns behind these intuitions, but in order to discuss them with clarity we need to first consider the categories of harm from closed-source software.

The most fundamental harm we have learned to expect from closed source is that it will be poor engineering – less reliable than open source. I have made the argument that bugs thrive on secrecy at length elsewhere and won’t rehash it here. This harm varies in importance according to the complexity of the software – more complex software is more bug-prone, so the advantage of open source is greater and the harm from closed source more severe. It also varies according to how serious the expected consequences of bugs are; the worse they get, the more valuable open source is. I’ll call this “reliability harm”.

Another harm is that you lose options you would have if you were able to modify the software to suit your own needs, or have someone do that for you. This harm varies in importance according to the expected value of customization; greater in relatively general-purpose software with a large range of potential use cases for modified versions, less in extremely specialized software tightly coupled to a single task and a single deployment. I’ll call this “unhackability harm”.

Yet another harm is that closed-source software puts you in an asymmetrical power relationship with the people who are privileged to see inside it and modify it. They can use this asymmetry to restrict your choices, control your data, and extract rent from you. I’ll call this “agency harm”.

Closed source increases your transition costs to get out of using the software in various ways, making escape from the other harms more difficult. Closed-source word processors using proprietary formats that no other program can fully handle are the classic example of this, but there are many others. I’ll call this “lock-in harm”.

[Update, two days later] A commenter points out another kind of harm from closed source: secrets can be lost, taking capabilities with them. There are magnetic media from the early days of computing – some famous cases include data of great historical interest recorded by the U.S. space program in the 1960s – that are intact but cannot be read because they used secret, proprietary data formats embodied only in hardware and specifications that no longer exist. This typifies an ever-present risk of closed-source software that becomes more severe as software-mediated communication gets more important. I’ll call this “amnesia harm”.

Finally, a particular software product is said to have “positive network externalities” when its value to any individual rises with the number of other people using it. Positive network externalities have consequences like those of lock-in harm; they raise the cost of transitioning out.

With these concepts in hand, let’s look at some real-world cases.

First, firmware for things like elevators and microwave ovens. Low reliability harm, because (a) it’s relatively easy to get right, and (b) the consequences of bugs are not severe – the most likely consequence is that the device just stops dead, rather than (say) hyper-irradiating you or throwing you through the building’s roof. Low unhackability harm – not clear what you’d do with this firmware if you could modify it. Low agency harm; it is highly unlikely that a toaster or an elevator will be used against you, and if it were it would be as part of a sufficiently larger assembly of surveillance and control technologies that simply being able to hack one firmware component wouldn’t help much. No lock-in harm, and no positive externalities. [There is some potential for amnesia harm if the firmware embodies good algorithms or tuning constants that can’t be recovered by reverse-engineering.]

Because it scores relatively low on all these scales of harm, highly specialized device firmware is the least difficult case for tolerating closed source. But as firmware develops more complexity, flexibility, and generality, the harms associated with it increase. So, for example, closed-source firmware in your basement router can mean serious pain – there have been actual cases of it hijacking DNS, injecting ads into your web browsing, and so on.

At the other end of the scale, desktop operating systems score moderate to high on reliability harm (depending on your application mix and the opportunity cost of OS failures). They score high on unhackability harm even if you’re not a programmer, because closed source means you get fixes and updates and new features not when you choose to invest in them but only when the vendor thinks it’s time. They score very high on agency harm (consider how much crapware comes bundled with a typical Windows machine) and very high on lock-in [and amnesia] harm (closed proprietary file formats, proprietary video streaming, and other such shackles). They have strong positive externalities, too.

Now let’s talk about phones. Closed-source smartphone operating systems like iOS have the same bundle of harms attached to them that desktop operating systems do, and for all the same reasons. The interesting thing to notice is that dumbphones – even when they have general-purpose processors inside them – are a different case. Dumbphone firmware is more like other kinds of specialized firmware – there’s less value in being able to modify it, and less exposure to agency harm. Dumbphone firmware differs from elevator firmware mainly in that (a) it carries some lock-in [and amnesia] harm (dumbphones jail your contacts list), and (b) it is so much more complex that the reliability harm is actually something of an issue.

Games make another interesting intermediate case. Very low reliability harm – OK, it might be annoying if your client program craps out during a World of Warcraft battle, but it’s not like having your financial records scrambled or your novel manuscript trashed. Moderate unhackability harm; if you bought a game, it’s probably because you wanted to play that game rather than some hypothetical variant of it, but modifying it is at least imaginable and sometimes fun (thus, for example, secondary markets in map levels and skins). No agency harm unless they’re embedding ads. No lock-in harm, [low odds of amnesia harm,] some positive externalities.

Word processors (and all the other kinds of productivity software they’ll stand in for here) raise the stakes nearly to the level of entire operating systems. Moderate to high reliability harm, again depending on your actual use case. High unhackability harm for the same reasons as OSes. Lower agency harm than an OS, if only because your word processor doesn’t normally have an excuse to report your activity or stream ads at you. Very high lock-in [and amnesia] harm. If the overall harm from closed source is less here than for an OS, it’s mainly because productivity programs are a bit less disruptive to replace than an entire OS.
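The comparative judgments in these case studies can be summarized as a toy scoring rubric. To be clear about what is and isn’t from the essay: the harm categories are the ones defined above, but the 0–3 numeric scores, the equal weighting, and the idea of simply summing them are hypothetical simplifications invented purely for illustration.

```python
# Toy rubric comparing closed-source harm across software categories.
# Harm categories come from the essay; the 0-3 scores and the unweighted
# sum are hypothetical simplifications for the sake of the sketch.

HARM_CATEGORIES = [
    "reliability", "unhackability", "agency",
    "lockin", "amnesia", "externalities",
]

# Rough readings of the case-by-case discussion (0 = negligible, 3 = very high).
cases = {
    "elevator firmware": {"reliability": 0, "unhackability": 0, "agency": 0,
                          "lockin": 0, "amnesia": 1, "externalities": 0},
    "game":              {"reliability": 0, "unhackability": 1, "agency": 0,
                          "lockin": 0, "amnesia": 0, "externalities": 1},
    "word processor":    {"reliability": 2, "unhackability": 3, "agency": 1,
                          "lockin": 3, "amnesia": 3, "externalities": 2},
    "desktop OS":        {"reliability": 2, "unhackability": 3, "agency": 3,
                          "lockin": 3, "amnesia": 3, "externalities": 3},
}

def total_harm(scores):
    """Sum per-category scores; a higher total means stronger grounds to refuse."""
    return sum(scores[c] for c in HARM_CATEGORIES)

# Rank the cases from most to least harmful.
ranked = sorted(cases, key=lambda name: total_harm(cases[name]), reverse=True)
for name in ranked:
    print(f"{name}: {total_harm(cases[name])}")
```

Even with made-up numbers, the ordering that falls out (desktop OS at the top, highly specialized firmware at the bottom) matches the intuitions the essay started from, which is the point of the exercise.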

So far I haven’t made any normative claims. Here’s the only one I really need: we should oppose closed-source software, and refuse to use it, in direct proportion to the harms it inflicts.

That sounds simple and obvious, doesn’t it? And yet, there are people who I won’t name but whose initials are R and M and S, who persist in claiming that this position isn’t an ethical stance at all, that it is somehow fatally unprincipled. Which is what it looks like when you’ve redoubled your efforts after forgetting your aim.

Really, this squishy “unprincipled” norm describes the actual behavior even of people who talk like fanatics about closed source being evil. Who, even among the hardest core of the “free software” zealots, actually spends any effort trying to abolish closed-source elevator firmware? That doesn’t happen; desktop and smartphone OSes make better targets because they’re more important – and with that pragmatism, we’re right back to comparative evaluation of consequential harm, even if the zealot won’t acknowledge that to himself.

Now that we have this analysis, it leads to conclusions few people will find surprising. That’s a feature, actually; if there were major surprises it would suggest that we had wandered too far away from the intuitions or folk theory we’re trying to clarify. Conclusions: we need to be most opposed to closed-source desktop and smartphone operating systems, because those have the most severe harms and the highest positive-externality stickiness. We can relax about what’s running in elevators and microwave ovens. We need to push harder for open source in basement routers as they become more capable. And the occasional game of Angry Birds or Civilization or World of Warcraft is not in fact a terrible act of hypocrisy.

One interesting question remains. What is the proper ethical response to situations in which there is no open-source alternative?

Let’s take this right to an instructive extreme – heart pacemakers. Suppose you have cardiac arrhythmia; should you refuse a pacemaker because you can’t get one with open-source firmware?

That would be an insane decision. But it’s the exact kind of insanity that moralists become prone to when they treat normative rules as worship objects or laudable fixations, forgetting that these rules are really just devices for the avoidance of harm and pain.

The sane thing to do would be to notice that there are kinds of harm in the world more severe than the harm from closed source, remember that the goal of all your ethical rules is the reduction of harm, and act accordingly.