This post is based on “Freedom, Security and Privacy,” a keynote I gave at OpenWest 2018. You can see the full video of the talk here.

Freedom, security and privacy are interrelated, though the relationship between these three concepts is more obvious in some cases than others. For instance, most people would recognize that privacy is an important part of freedom. In fact, studies have shown that being under surveillance changes your behavior; one study in particular demonstrated that knowing you are under surveillance silences dissenting views. The link between privacy and security is also pretty strong, since you often rely on security (encryption, locked doors) to protect your privacy.

The link between freedom and security may be less obvious than the others. This is because security often relies on secrecy. You wouldn’t publish your password, safe combination or debit card PIN for the world to see, after all. Some people take the idea that security sometimes relies on secrecy to mean that secrecy automatically makes things more secure. They then extend that logic to hardware and software: if secret things are more secure, and proprietary hardware and software are secret, then proprietary hardware and software must be more secure than free alternatives.

The reality is that freedom, security and privacy are not just interrelated, they are interdependent. In this post I will analyze the links between these three concepts, and in particular how freedom strengthens security and privacy, with real-world examples.

Do Many Eyes Make Security Bugs Shallow?

A core tenet of the Free Software movement is that “many eyes make bugs shallow.” This refers to the fact that with proprietary software, only a limited number of developers are able to inspect the code. With Free Software, everyone is free to inspect the code, so you end up with more people (and a more diverse set of people) looking at it. These diverse eyes are more likely to find bugs than if the code were proprietary.

Some people extend this idea to say that many eyes also make security bugs shallow. To that I offer the following counterpoint: OpenSSL, Bash and ImageMagick. All three of these projects are examples where the code was available for everyone to inspect, yet each one had critical security bugs (Heartbleed, Shellshock and ImageTragick, respectively) hiding in the code for years before they were found. In the case of ImageMagick in particular, I’m all but certain that security researchers were motivated by the recent bugs in OpenSSL and Bash to look for bugs in other Free Software projects that ship in many embedded devices. Now before anyone in the proprietary software world gets too smug, I’d also like to offer a counter-counterpoint: Flash, Acrobat Reader and Internet Explorer. All three are of a similar vintage to the Free Software examples, and all three are great examples of proprietary software projects with horrible security track records.

So what does this mean? For security bugs, it’s not sufficient for many eyes to look at the code: security bugs need the right eyes looking at it. Whether they are fuzzing a black box, reverse engineering a binary, or reading the source directly, security researchers will find bugs if they look.

When Security Reduces Freedom

At Purism we not only develop hardware, we also develop the PureOS operating system that runs on our hardware. PureOS doesn’t have to run on Purism hardware, however, and we’ve heard from customers who use PureOS on other laptops and desktops. Because of this, we will sometimes test PureOS on other hardware to see how it performs. One day we decided to try PureOS on a low-end lightweight notebook, yet when we went to launch the installer, we discovered that the notebook refused to boot it! It turns out that Secure Boot was preventing the PureOS installer from running.

What is Secure Boot and why is it problematic?

Secure Boot is a security feature added to UEFI systems that aims to protect systems from malware that might attack the boot loader and attempt to hide from the operating system (by infecting it while it boots). Secure Boot works by requiring that any code it runs at boot time be signed by a certificate from Microsoft, or from vendors that Microsoft has certified. The assumption is that an attacker would not be able to access the private keys from Microsoft or one of its approved vendors in order to sign their own malicious code. Because of that, Secure Boot can prevent an attacker from running code at boot.
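To make the trust model concrete, here is a minimal sketch of that signing chain in Python. Real Secure Boot uses X.509 certificates and RSA signatures stored in UEFI signature databases; in this toy model an HMAC stands in for a vendor signature, and all key material and file names are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical key material standing in for the real vendor signing keys.
TRUSTED_VENDOR_KEYS = {
    "microsoft": b"ms-signing-key",
    "certified-oem": b"oem-signing-key",
}

def sign(vendor_key: bytes, bootloader: bytes) -> bytes:
    """Toy 'signature': an HMAC over the bootloader image."""
    return hmac.new(vendor_key, bootloader, hashlib.sha256).digest()

def secure_boot_allows(bootloader: bytes, signature: bytes) -> bool:
    # The firmware accepts a binary only if some trusted key signed it.
    return any(
        hmac.compare_digest(sign(key, bootloader), signature)
        for key in TRUSTED_VENDOR_KEYS.values()
    )

# A loader signed by a trusted vendor boots fine.
windows_loader = b"bootmgfw.efi contents"
ms_sig = sign(TRUSTED_VENDOR_KEYS["microsoft"], windows_loader)
print(secure_boot_allows(windows_loader, ms_sig))       # True

# An unsigned Free Software loader is refused, no matter how safe it is.
pureos_loader = b"grubx64.efi contents"
print(secure_boot_allows(pureos_loader, b"\x00" * 32))  # False
```

The last two lines are the crux of the problem described below: the gatekeeping depends entirely on whose keys sit in that trusted list, and the user isn’t one of them.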

When Secure Boot was first announced, the Linux community got in quite an uproar over the idea that Microsoft would be able to block Linux distributions from booting on hardware. The counter-argument was that a user could also opt to disable Secure Boot in the UEFI settings at boot time and boot whatever they want. Some distributions like Red Hat and Ubuntu have taken the additional step of getting their boot code signed so you can install either of those distributions even with Secure Boot enabled.

Debian has not yet gotten its boot code signed for Secure Boot, and since PureOS is based on Debian, this also means it cannot boot when UEFI’s Secure Boot is enabled. You might ask what the big deal was, since all we had to do was disable Secure Boot and install PureOS. Unfortunately, some low-cost hardware saves money by shipping a very limited UEFI configuration that doesn’t give you the full range of UEFI options, such as the ability to disable Secure Boot. That particular laptop fell into this category, so we couldn’t disable Secure Boot and as a result we couldn’t install our OS: we were limited to operating systems whose boot code was signed by Microsoft or its approved vendors.

Secure Booting: Now with Extra Freedom

It’s clear that protecting your boot code from tampering is a nice security feature, but is that possible without restricting your freedom to install any OS you want? Isn’t the only viable solution having a centralized vendor sign approved programs? It turns out that Free Software has provided a solution in the form of Heads, a program that runs within a Free Software BIOS to detect the same kind of tampering Secure Boot protects you from, only with keys that are fully under your control!

Heads works by using a special independent chip on your motherboard, called the TPM, to store measurements of the BIOS. When the system boots up, the BIOS sends measurements of itself to the TPM. If those measurements match the valid measurements you set up previously, the TPM unlocks a secret that Heads uses to prove to you it hasn’t been tampered with. Once you feel confident that Heads is safe, you can tell it to boot your OS, and Heads will then check all of the files in the /boot directory (the OS kernel and supporting boot files) to make sure they haven’t been tampered with. Heads uses your own GPG key signatures to validate these files, and if it detects that anything has been tampered with, it sends you a warning so you know not to trust the machine and not to type in any disk decryption keys or other secrets.
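The TPM side of this process can be sketched in a few lines of Python. This is a simplified model of the TPM “extend” operation that measured boot relies on: each boot component is hashed into a running register (a PCR), and a secret is only released if the final register value matches what you provisioned. The component names and values here are invented; real Heads extends hardware PCRs and seals the secret in the TPM itself.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM 'extend': new PCR = hash(old PCR || hash(measured component)).
    # Order matters, so any change anywhere in the chain alters the result.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

# Measurements recorded when you first provisioned the machine.
firmware = [b"coreboot stage", b"Heads payload"]
expected_pcr = measure_boot(firmware)

# A later boot of the same code reproduces the PCR, so the TPM would
# unseal the secret that proves the firmware is untampered.
print(measure_boot(firmware) == expected_pcr)   # True

# A tampered BIOS yields a different PCR: the secret stays sealed,
# and Heads cannot present the proof you expect.
tampered = [b"coreboot stage", b"Heads payload with implant"]
print(measure_boot(tampered) == expected_pcr)   # False
```

The key property is that the attacker cannot fake the final PCR value without running the exact original code, because each step hashes in everything that came before it.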

With Heads, you get the same kind of protection from tampering as Secure Boot, but you can choose to change both the TPM secrets and the GPG keys Heads uses at any time–everything is under your control. Plus since Heads is Free Software, you can customize and extend it to behave exactly as you want, which means an IT department could customize it to tell the user to turn the computer over to IT if Heads detects tampering.

When Security without Freedom Reduces Privacy

Security is often used to protect privacy, but without freedom, an attacker can more easily subvert security to violate privacy. Since the end user can’t easily inspect proprietary firmware, an attacker who can exploit that firmware can implant a backdoor that goes unseen for years. Here are two specific examples where the NSA took advantage of this to snoop on targets without their knowledge.

NSA Backdoors in Cisco Products: Glenn Greenwald was one of the reporters who initially broke the Edward Snowden NSA story. In his memoir of those events, No Place to Hide, Greenwald describes an NSA program in which the NSA would intercept Cisco products that were shipping overseas, plant back doors in them, then repackage them with the factory seals. The goal was to use those back doors to snoop on otherwise protected network traffic going over that hardware. Update: Five new backdoors were discovered in Cisco routers in early 2018, although whether they were intentional or accidental has not been determined.

NSA Backdoors in Juniper Products: Just in case you are on Team Juniper instead of Team Cisco, it turns out you weren’t excluded. The NSA is suspected in a back door found in Juniper firewall products within its ScreenOS that had been there since mid-2012. The backdoor allowed admin access to Juniper firewalls over SSH and also enabled the decryption of VPN sessions within the firewall, both very handy if you want to defeat the privacy of people using those products.

While I picked on network hardware in my examples, there are plenty of other cases outside of Cisco, Juniper and the NSA where a backdoor or default credentials showed up inside the proprietary firmware of a security product, whether because of a disgruntled admin, a developer bug, or paid spyware. The fact is, this is a difficult if not impossible problem to solve with proprietary software, because there’s no way for an end user to verify that the software they get from their vendor matches the source code that was used to build it, much less audit that source code for back doors.

When Freedom Protects Security and Privacy

The Free Software movement is blazing the trail for secure and trustworthy software via the reproducible builds initiative. For the most part, people don’t install software directly from the source code; instead, a vendor takes code from an upstream project, compiles it, and creates a binary file for you to use. In addition to a number of other benefits, using pre-compiled software saves the end user both the time and the space it would take to build the software themselves. The problem is, an attacker could inject their own malicious code at the software vendor, and even though the source code itself is Free Software, the malicious code could still hide inside the binary.

Reproducible builds attempt to answer the question: “Does the binary I get from my vendor match the upstream source code that was used to build it?” The process ensures that a particular version of the source code generates the exact same output each time it is built, regardless of the system that builds it, which lets you test for any tampering that could have happened between the source code repository, the vendor, and you. If you want to verify that a particular piece of software is safe, you can download the source code directly from the upstream developer, build it yourself, and then compare your binary with the binary you got from your vendor. If the binaries match, the code is safe; if not, it may have been tampered with.
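The comparison step at the end is simple in principle: hash both binaries and check that the digests match. Here is a minimal Python sketch of that final check; the binary contents are placeholder bytes standing in for a vendor package and a package you built yourself from upstream source.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a binary blob, the usual comparison key."""
    return hashlib.sha256(data).hexdigest()

# Placeholders: in practice you would read these from files, e.g. the
# package from your vendor's mirror and your own build of the same
# source at the same version.
vendor_binary = b"\x7fELF...package contents"
my_build      = b"\x7fELF...package contents"

if sha256_of(vendor_binary) == sha256_of(my_build):
    print("match: vendor binary corresponds to the public source")
else:
    print("MISMATCH: the vendor binary may have been tampered with")
```

The hard part, of course, is everything the reproducible builds projects do before this step: making compilers, timestamps and build environments deterministic so that two honest builds really are bit-for-bit identical.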

Debian is working to make all of its packages reproducible and software projects such as Arch, Fedora, Qubes, Heads, Tails, coreboot and many others are also working on their own implementations. This gives the end user an ability to detect tampering that would be impossible to detect with proprietary software since by definition there’s no way for you to download the source code and validate it yourself.

Freedom, Security and Privacy in Your Pocket

Another great example of the interplay between freedom, security and privacy can be found by comparing the two operating systems just about everyone carries around with them in their pockets: iOS and Android. Let’s rate the freedom, security and privacy of both of these products on a scale of 1 to 10.

In the case of iOS, it’s pretty safe to say that the general consensus puts iOS security near the top of the scale, as it often stands up to government-level attacks. When it comes to privacy, we only really have Apple’s marketing and other public statements to go by; however, because Apple doesn’t seem to directly profit off of user data (although apps still could), we can cut them a bit of a break. When it comes to freedom, their walled-garden approach to app development and their tight secrecy around their own code clearly earn them a low rating, so the end result is:

Security: 9

Privacy: 6

Freedom: 1

Now let’s look at Android. While I’m sure some Android fans might disagree, the general consensus among the security community seems to be that Android is not as secure as iOS, so let’s put its security a bit lower. When it comes to freedom, if you dig far enough into Android you will find a gooey Linux center, along with a number of other base components that Google takes from the Free Software community; outside parties have even been able to build their own stripped-down versions of Android from the source code. But while you have the option to load applications from outside Google’s Play Store, most of the apps you will find there, along with almost all of Google’s own apps, are proprietary, so Android’s freedom rating is a mixed bag. When it comes to privacy, though, I think it’s pretty safe to rate it very low, given that the fundamental business model behind Android is to collect and sell user data.

Security: 7

Freedom: 5

Privacy: 1

Over the long run, the Librem line of products aims to address these concerns.

Why Not All Three?

To protect your own security and privacy, you need freedom and control. Without freedom, security and privacy require the full trust of vendors. However, vendors don’t always have your best interests at heart; in fact, in many cases vendors have a financial incentive to violate your interests, especially when it comes to privacy. The problem is, with proprietary software it can be difficult to prove a vendor is untrustworthy and if you do prove it, it’s even harder to revoke that trust.

With Free Software products, you have control of your trust. You also have the ability to verify that your Free Software vendors are trustworthy. With reproducible builds, you can download the source code and verify it all yourself.

In the end, freedom results in stronger security and privacy. These three concepts aren’t just interrelated; they are interdependent. As you increase freedom, you increase security and privacy, and when you decrease freedom, you put security and privacy at risk. This is why we design all of our products with freedom, security and privacy as strict requirements, and continue to work toward increasing all three in everything we do.