In a recent column, security expert Bruce Schneier proposed breaking up the NSA – handing its offensive capabilities work to US Cyber Command and its law enforcement work to the FBI, and terminating its programme of attacking internet security. In place of this, Schneier proposed that “instead of working to deliberately weaken security for everyone, the NSA should work to improve security for everyone.” This is a profoundly good idea for reasons that may not be obvious at first blush.

People who worry about security and freedom on the internet have long struggled with the problem of communicating the urgent stakes to the wider public. We speak in jargon that’s a jumble of mixed metaphors – viruses, malware, trojans, zero days, exploits, vulnerabilities, RATs – the striated fossil remains of successive efforts to come to grips with the issue. And when we do manage to alarm people about the stakes, we have very little comfort to offer them, because internet security isn’t something individuals can solve.

I remember well the day this all hit home for me. It was almost exactly a year ago, and I was out on tour with my novel Homeland, which tells the story of a group of young people who come into possession of a large trove of government leaks detailing a series of illegal programmes through which supposedly democratic governments spy on people by compromising their computers. I kicked the tour off at the gorgeous, daring Seattle Public Library main branch, in a hi-tech auditorium, before an audience of 21st-century dwellers in one of the technology revolution’s hotspots, home of Microsoft and Starbucks (an unsung technology story – the coffee chain is basically an IT shop that uses technology to manage and deploy coffee around the world).

I explained the book’s premise, and then talked about how this stuff works in the real world. I laid out a parade of awfuls, including a demonstrated attack that hijacked implanted defibrillators from 10 metres’ distance and caused them to compromise other defibrillators that came into range, implanting an instruction to deliver lethal shocks at a certain time in the future. I talked about Cassidy Wolf, the reigning Miss Teen USA, whose computer had been taken over by a “sextortionist” who captured nude photos of her and then threatened to release them if she didn’t perform live sex shows for him. I talked about the future of self-driving cars, smart buildings, implanted hearing aids and robotic limbs, and explained that the world is made out of computers that we put our bodies into, and that we put inside our bodies.

These computers are badly secured. What’s more, governments and their intelligence agencies are actively working to undermine the security of our computers and networks. This was before the Snowden revelations, but we already knew that governments were buying “zero-day vulnerabilities” from security researchers. These are critical bugs that can be leveraged to compromise entire systems. Until recently, the normal response to the discovery of one of these “vulns” was to report it to the vendor so it could be repaired.

But spy agencies and law enforcement have created a bustling marketplace for “zero-days”, which are weaponised for the purpose of attacking the computers and networks of “bad guys”. The incentives have shifted: a newly discovered bug now has a good chance of remaining unpatched and live in the field, because governments want to be able to use it to hack their enemies.

Scientists formulate theories that they attempt to prove through experiments that are reviewed by peers, who attempt to spot flaws in the reasoning and methodology. Scientific theories are in a state of continuous, tumultuous improvement as old ideas are overturned in part or whole, and replaced with new ones.

Security is science on meth. There is a bedrock of security that is considered relatively stable – the mathematics of scrambling and descrambling messages – but everything above that bedrock has all the stability of a half-set custard. That is, the best way to use those stable, well-validated algorithms is mostly up for grabs, as the complex interplay of incompatible systems, human error, legacy systems, regulations, laziness, recklessness, naivety, adversarial cunning and perverse commercial incentives all jumble together in ways that open the American retailer Target to the loss of 100m credit card numbers, and the whole internet to GCHQ spying.

As Schneier says: “Anyone can design a security system that works so well that he can’t figure out how to break it.” That is to say, your best effort at security is, by definition, only secure against people who are at least as dumb as you are. Unless you happen to be the smartest person in the world, you need to subject your security system to the kind of scrutiny that scientists use to validate their theories, and be prepared to incrementally patch and refactor things as new errors are discovered and reported.

Hence: “Security is a process, not a product” – another useful Schneierism. This is a distinction that sets security engineering apart from other engineering disciplines. Other kinds of engineers exist in a changing world, but security’s change is of a different sort altogether. For example, structural engineering is a field under continuous improvement, and tomorrow’s structural engineers will be able to apply better techniques to their work, but no one worries that someone will invent a way of making skyscrapers collapse tomorrow through application of an easily automated, low-cost technique.

The difference is that security engineering is an adversarial discipline. A structural engineer must contend with the forces of entropy and gravity, of harsh winds and rising seas. But a security engineer must contend with enemy security engineers who labour every hour of every day to find flaws in her work and use those flaws to covertly undermine it.

I think there’s a good case that security engineering isn’t “engineering” at all. Engineers try to erect and maintain infrastructure against threats that are indifferent to them. These threats may be powerful – floods in the Philippines, earthquakes in Haiti – but they aren’t deliberate. The world doesn’t have a will. It doesn’t care whether the earth shakes or not.

But security adversaries want to break security. They lack the relentless force of physics, but exert something totally nonphysical: cunning.

Security as an exercise in public health

If security isn’t engineering, what is it?

I think there’s a good case to be made for security as an exercise in public health. It sounds weird at first, but the parallels are fascinating and deep and instructive.

Last year, when I finished that talk in Seattle, a talk about all the ways that insecure computers put us all at risk, a woman in the audience put up her hand and said, “Well, you’ve scared the hell out of me. Now what do I do? How do I make my computers secure?”

And I had to answer: “You can’t. None of us can. I was a systems administrator 15 years ago. That means I’m barely qualified to plug in a WiFi router today. I can’t make my devices secure and neither can you. Not when our governments are buying up information about flaws in our computers and weaponising it as part of their crime-fighting and anti-terrorism strategies. Not when it is illegal to tell people about flaws in their computers, where such a disclosure might compromise someone’s anti-copying strategy.

“But if I had just stood here and spent an hour telling you about water-borne parasites; if I had told you how inadequate water treatment would put you and everyone you love at risk of horrifying illness and terrible, painful death; if I had explained that our very civilisation was at risk because the intelligence services were pursuing a strategy of keeping information about pathogens secret so they could weaponise them, knowing that no one was working on a cure – you would not ask me, ‘How can I purify the water coming out of my tap?’”

Because when it comes to public health, individual action only gets you so far. It doesn’t matter how good your water is: if your neighbour’s water gives him cholera, there’s a good chance you’ll get cholera, too. And even if you stay healthy, you’re not going to have a very good time of it when everyone else in your country is stricken and has taken to their beds.

If you discovered that your government was hoarding information about water-borne parasites instead of trying to eradicate them; if you discovered that it was more interested in weaponising typhus than in curing it – you would demand that your government treat your water supply with the seriousness it is due.

The public health analogy is surprisingly apt here. The public health threat model is in a state of continuous flux, because our well-being is under continuous, deliberate attack from pathogens for whom we are, at best, host organisms, and at worst, dinner. Evolution drives these organisms to a continuously shifting array of tactics to slide past our defences.

Public health isn’t just about pathogens, either – its thorniest problems are about human behaviour and social policy. HIV is a blood-borne disease, but disrupting its spread requires changes to our attitudes about sex, pharmaceutical patents, drugs policy and harm minimisation. Almost everything interesting about HIV is too big to fit on a microscope slide.

And so it is for security: crypto is awesome maths, but it’s just maths. Security requires good password choice, good password management, good laws about compelled crypto disclosure, transparency into corporate security practices, and, of course, an end to the governmental practice of spending $250m a year on anti-security sabotage through the NSA/GCHQ programmes Bullrun and Edgehill.

Everything involves the internet

But for me, the most important parallel between public health and internet security is their significance to our societal wellbeing. Everything we do today involves the internet. Everything we do tomorrow will require the internet. If you live near a nuclear power plant, fly in airplanes, ride in cars or trains, have an implanted pacemaker, keep money in the bank, or carry a phone, your safety and well-being depend on a robust, evolving practice of network security.

This is the most alarming part of the Snowden revelations: not just that spies are spying on all of us, but that they are actively sabotaging all of our technical infrastructure to ensure that they can continue to spy on us.

There is no way to weaken security in a way that makes it possible to spy on “bad guys” without making all of us vulnerable to bad guys, too. The goal of national security is totally incompatible with the tactic of weakening the nation’s information security.

“Virus” has been a term of art in the security world for decades, and with good reason. It’s a term that resonates with people, even people with only a cursory grasp of technology. As we strive to make the public and our elected representatives understand what’s at stake, let’s expand that pathogen/epidemiology metaphor. We’d never allow MI5 to suppress information on curing typhus so they could attack terrorists by infecting them with it. We need to stop allowing the NSA and GCHQ to suppress information on fixing bugs in our computers, phones, cars, houses, planes, and bodies.

If GCHQ wants to improve the national security of the United Kingdom – if the NSA wants to improve America’s national security – they should be fixing our technology, not breaking it. The technology of Britons and Americans is under continuous, deadly attack from criminals, from foreign spies, and from creeps. Our security is better served by armouring us against these threats than by undermining security so that cops and spies have an easier time attacking “bad guys”.