I have a healthy level of paranoia given the territory I inhabit. When you write things about hackers and government agencies and all that, you simply have a higher level of skepticism and caution about what lands in your e-mail inbox or pops up in your Twitter direct messages. But my paranoia is also based on a rational evaluation of what I might encounter in my day-to-day: it's based on my threat model.

In the most basic sense, threat models are a way of looking at risks in order to identify the most likely threats to your security. And the art of threat modeling today is widespread. Whether you're a person, an organization, an application, or a network, you likely go through some kind of analytical process to evaluate risk.

Threat modeling is a key part of the practice people in security often refer to as "Opsec." A portmanteau of military lineage, short for "operations security," Opsec originally referred to the idea of preventing an adversary from piecing together intelligence from bits of sensitive but unclassified information, as wartime posters warned with slogans like "Loose lips might sink ships." In the Internet age, Opsec has become a much more broadly applicable practice—it's a way of thinking about security and privacy that transcends any specific technology, tool, or service. By using threat modeling to identify your own particular pile of risks, you can then move to counter the ones that are most likely and most dangerous.

Threat modeling doesn't have to be rocket science. Most people already (consciously or subconsciously) have a threat model for the physical world around them—whether it's changing the locks on the front door after a roommate moves out or checking window locks after a burglary in the neighborhood. The problem is that very few people pay any sort of regular attention to privacy and security risks online unless something bad has already happened.

That's not from a lack of effort by employers and industry. Collectively, society spends billions on information security each year, and it's commonplace for employees of all sorts to go through some kind of digital security training these days. But neither the security industry nor the media has helped normalize threat modeling. The public gets bombarded with bits of tradecraft (or worse, security "folkways") every day—every time a new malware threat emerges, a television journalist will inevitably tell viewers that their best protection is "a complex password."

And though it's easy to find advice on how to "stay safe" digitally, much of the good advice doesn't seem to really stick. Perhaps it's because that advice doesn't always match with the actual needs of the people looking for it.

"There's a lot of stuff going on, and we as technologists tend to jump to advice like 'use Signal' or 'use Tor' without asking, 'what matters to you?'" said Adam Shostack, who developed tools and methodologies for developers to do threat modeling for their software while at Microsoft. Shostack helped develop the CVE standard for tracking software vulnerabilities and is now an independent author, a consultant, and a member of the Black Hat Review Board.

Demystifying the threat model

Recently, Shostack has been working with the Seattle Privacy Coalition (SPC) on a privacy threat model for the people of Seattle, based on his approach to threat modeling for software developers. Intended to demystify threat modeling for average people, Shostack's generalized approach boils down to a quartet of questions:

What are you doing? (The thing you're trying to do, and what information is involved.)

What can go wrong? (How what you're doing could expose personal information in ways that are bad.)

What are you going to do about it? (Identifying changes that can be made in technology and behavior to prevent things from going wrong.)

Did you do a good job? (Re-assessing to see how much risk was reduced.)

What Shostack's approach doesn't directly address are the specific sources of threats to privacy and security. That's something Shostack doesn't see as being particularly helpful, since that part of threat modeling isn't necessarily something the average person can deal with. "Telling people to be anxious all the time does little good," he said.

But other security experts Ars spoke with felt that understanding what types of threats a person is most likely to encounter is a key part of building a personal threat model—one along the lines of the Electronic Frontier Foundation's five-question structure:

What do you want to protect? (The data, communications, and other things that could cause problems for you if misused.)

Who do you want to protect it from? (The people, organizations, and criminal actors who might seek access to that stuff.)

How likely is it that you will need to protect it? (Your personal level of exposure to those threats.)

How bad are the consequences if you fail?

How much trouble are you willing to go through in order to try to prevent those? (The money, time, and convenience you're willing to dispense with to protect those things.)

I've tried to consolidate the two approaches above into a set of steps for the average mortal—or at least, for someone helping the average mortal. The Ars Threaty Threat Assessment Model (or, as some readers have demanded, the Ars Threaty McThreatface Assessment Model) squeezes it all into three compound questions and a shampoo bottle instruction:

Who am I, and what am I doing here?

Who or what might try to mess with me, and how?

How much can I stand to do about it?

Rinse and repeat.
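For readers who think in code, the questions above can be sketched as a simple worksheet that scores each risk and sorts the list so you tackle the worst first. This is purely an illustration of the process, not a formal tool: the `Threat` class, the example entries, and the likelihood-times-impact scoring are all hypothetical assumptions.

```python
# A hypothetical sketch of the three-question model as a worksheet.
# The scoring scheme (likelihood x impact) is a common heuristic,
# not part of any formal methodology described here.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str       # "Who am I, and what am I doing here?"
    adversary: str   # "Who or what might try to mess with me, and how?"
    likelihood: int  # 1 (rare) to 5 (near-certain)
    impact: int      # 1 (annoying) to 5 (catastrophic)

    @property
    def risk(self) -> int:
        # Simple likelihood-times-impact score for ranking threats
        return self.likelihood * self.impact

# Example entries -- invented for illustration
threats = [
    Threat("email account", "phishing crews", likelihood=4, impact=4),
    Threat("home Wi-Fi", "opportunistic neighbors", likelihood=2, impact=2),
    Threat("cloud photo backup", "credential-stuffing bots", likelihood=3, impact=5),
]

# "How much can I stand to do about it?" -- address the top-ranked
# risks first, then "rinse and repeat" as circumstances change.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.asset} vs. {t.adversary}")
```

The point of even a toy ranking like this is the third question: effort is finite, so it should go toward the threats at the top of the list, and the list should be revisited whenever your situation changes.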

For the TL;DR, you could skim ahead to "how much can I stand to do about it?" But with threats constantly changing and evolving, helping people first understand how to assess their risks leads to better security in the long term than merely following a quick set of tips. It's the "teach a person to fish" approach, and it starts with a simple question.