When users of Lavabit, an encrypted e-mail service, logged on to the site this past August, they found a bewildering letter on the site’s main page. Ladar Levison, the founder and sole employee of Lavabit, had shut down his business rather than “become complicit in crimes against the American people.” Lavabit subscribers would later discover that Levison had walked away because federal investigators had asked him to hand over his master decryption key, which would have granted them unfettered access to most of Lavabit’s data. Shortly afterward, the encryption provider Silent Circle followed suit, summarily deleting its users’ stored mail and mothballing its e-mail servers. In the wake of the Snowden revelations, which should have driven demand for their services, encrypted e-mail providers were, in the United States at least, rapidly becoming an endangered species. This leads to a question that has received relatively little attention: Why is encrypted e-mail so rare in the first place?

More than ninety-five per cent of all e-mail flowing over the Internet today does so in a vulnerable, unencrypted form. Yet the technology used to encrypt e-mail is hardly new: in 1991, a hacker named Philip Zimmermann uploaded a free encryption program, modestly called Pretty Good Privacy, to the Internet. Better known today as P.G.P., it was nothing short of revolutionary. For the first time in history, the average citizen had access to encryption that even the N.S.A. couldn’t break. Whereas e-mail before P.G.P. was like sending an open-faced postcard through the mail, every P.G.P.-encrypted message enjoyed privacy on par with the fabled Zero Halliburton diplomatic attaché case, even if what you e-mailed was only a picture of your cat.

The most basic form of encryption is based on a single secret code, which functions as a key that allows the reader to unlock the sender’s scrambled message; anyone who gets ahold of that key could unscramble the message. But P.G.P. and its successors use a form of cryptography known as public-key cryptography, which was developed in the nineteen-seventies. Users have two keys: a public key that can be shared, which encrypts messages that are sent to them, and one that they keep private, to decrypt the messages they receive. Prior to the release of P.G.P., public-key cryptography was generally reserved for military and government use—and was viewed as inappropriate for individual users on personal computers. P.G.P. made these advanced encryption algorithms available to the masses.
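The arithmetic behind public-key encryption can be seen in miniature with the RSA algorithm, one of the schemes P.G.P. popularized. The sketch below uses deliberately tiny primes so the numbers fit on a page; real keys use primes hundreds of digits long, and the variable names here are illustrative, not part of any actual library.

```python
# Toy RSA demonstration with tiny primes (real keys use primes hundreds
# of digits long -- never use numbers this small in practice).
p, q = 61, 53            # two secret primes
n = p * q                # the public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient of n: 3120
e = 17                   # public exponent, shared with the world
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

message = 65                       # a message, encoded as a number < n
ciphertext = pow(message, e, n)    # anyone holding (e, n) can encrypt
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The asymmetry is the whole trick: the pair (e, n) can be published freely, because recovering d from it requires factoring n, which is infeasible for full-sized keys.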

Free-speech advocates rejoiced; lawmakers panicked. Zimmermann was investigated by a grand jury on charges of “exporting a munition,” the same charge that would be levelled against an international arms dealer. Many observers predicted one of two extreme outcomes: either Zimmermann would win, and all of our communications would be encrypted—making them entirely opaque to governments and law-enforcement agencies—or governments would get their way and only “government-approved” encryption technologies would survive. Neither came to pass. The Justice Department did back off, handing Zimmermann and his fellow pro-cryptography activists, or “cypherpunks,” what appeared to be an overwhelming political and legal victory. But the victory turned out to be somewhat hollow: the government had relented partly because it realized that encryption wasn’t going mainstream at all.

The main reason for this is as sad as it is simple: encrypting e-mail is just hard. Before you can send your friend an encrypted message, she must first install the software, generate an encryption key pair, and deliver the public portion to you. You must then download and install that key on your own computer and verify that it’s the right key—not a fake one sent to trick you. You must repeat this process for everyone else you want to talk to. And that’s before sending a single message.
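The verification step above — confirming a key is genuine and not a fake — is typically done by comparing a short "fingerprint" of the key over some trusted channel, such as a phone call or an in-person meeting. A minimal sketch of the idea, with a hypothetical helper and placeholder key bytes:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Hash the key down to a short digest two people can read aloud
    and compare. (Illustrative helper, not a real P.G.P. function.)"""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into four-character chunks for easier reading over the phone.
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# Bob computes the fingerprint of the key he received; Alice reads hers
# out loud from her own machine. If the two strings match, Bob can trust
# that no one swapped in a fake key along the way.
alice_key = b"-----BEGIN PGP PUBLIC KEY BLOCK-----..."  # placeholder bytes
print(fingerprint(alice_key))
```

Even this simplified ritual illustrates the usability problem: the cryptography is easy for the computer, but the trust-establishing step still demands effort and discipline from the humans.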

While this might not seem terribly onerous, it’s more than enough to dissuade most users. When it comes to technology people use every day, even a tiny bit of extra complexity can spell doom. In one famous study, researchers from Carnegie Mellon and U.C. Berkeley asked a group of tech-savvy volunteers to try P.G.P. Some participants just gave up. Others took actions that completely negated the purpose of encryption, like e-mailing their secret decryption key instead of the message. (To put this in everyday terms: that’s like buying a fancy safe, then taping the combination to the door.) Few continued using it in the long term.

What’s more, even when you do get encryption working correctly, it still leaves a lot of useful data vulnerable to eavesdroppers. For example, tools like P.G.P. don’t encrypt the “metadata” associated with e-mails—stuff like dates, e-mail addresses, and even subject lines. In some instances this is inevitable—it’s hard to deliver e-mail without knowing the destination, after all—but in many cases it’s a result of the drive to stay compatible with regular e-mail, since encrypting the sending e-mail address and subject line would pose awkward problems for users who prefer to view their e-mail with standard mail programs. But, as we’re now learning, this metadata can be just as valuable as the message itself, if not more so. It’s also awfully difficult to protect this information while e-mail is in transit.
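The metadata leak is easy to see if you construct a P.G.P.-style message yourself: only the body is scrambled, while the envelope travels in the clear. A small illustration using Python's standard e-mail library (the addresses and ciphertext are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
# These header fields travel unencrypted even when the body is not:
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Meeting about the merger"  # subject lines are metadata too
# The body is the only part the encryption protects:
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\nhQEMA...\n-----END PGP MESSAGE-----"
)

raw = msg.as_string()
# An eavesdropper who captures this in transit learns who is talking to
# whom, when, and about what -- without breaking any cryptography.
assert "alice@example.com" in raw and "merger" in raw
```

This is the compatibility trade-off in action: because the headers must remain readable to ordinary mail servers and mail programs, they remain readable to everyone else, too.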

Lastly, there’s the problem of securing your decryption keys. These keys are everything. If you simply store them on your computer, you risk losing them, and with them access to all of your mail; you can also read that mail from only the one machine. But if you store them on a server for convenience—as Lavabit’s users did—an attacker who infiltrates that server (or the network leading into it) can potentially read all of your mail.
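A common compromise is to store the private key encrypted under a password, so that even a stolen copy is useless without the passphrase. The sketch below is a toy, using only Python's standard library: real systems derive the wrapping key the same way (with PBKDF2 or a stronger function) but protect the key with an authenticated cipher such as AES-GCM rather than the simple XOR used here for illustration.

```python
import hashlib
import os
import secrets

def wrap_key(private_key: bytes, password: str) -> tuple[bytes, bytes]:
    """Derive a wrapping key from the password and mask the private key.
    Toy sketch: real systems use an AEAD cipher here, not raw XOR."""
    salt = os.urandom(16)  # random salt defeats precomputed-password tables
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              200_000, dklen=len(private_key))
    wrapped = bytes(a ^ b for a, b in zip(private_key, kek))
    return salt, wrapped

def unwrap_key(salt: bytes, wrapped: bytes, password: str) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              200_000, dklen=len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, kek))

secret_key = secrets.token_bytes(32)
salt, stored = wrap_key(secret_key, "correct horse battery staple")
assert unwrap_key(salt, stored, "correct horse battery staple") == secret_key
```

The 200,000 PBKDF2 iterations are the point: they make each password guess expensive, so an attacker who steals the server's stored keys still faces a slow brute-force search. Lavabit's design put exactly this kind of wrapped material, plus the master key, within a subpoena's reach.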

These problems aren’t intractable. It’s possible that talented engineers could solve them, with the help of companies like Apple, Microsoft, or Google, who provide mail services and applications to millions and millions of people. But they have little incentive to do so for consumers. While most mail programs support some sort of e-mail encryption (usually S/MIME), it’s often targeted at large enterprises, where an I.T. support staff can manage your keys. And for Google, which makes money by selling you ads based on the content of your e-mails, there’s not much of an upside in obscuring what they contain. So the conventional wisdom about mass e-mail encryption has remained unchanged for a decade or more: it’s a good idea whose time has just not come.

There are signs that this could change. Last month, Levison and Silent Circle proposed a new technological partnership named the Dark Mail Alliance. Very few details about Dark Mail are public yet, so we’ll have to wait and see how the technology works. But some early indicators are hopeful. For one, Dark Mail takes an entirely different approach from previous e-mail-encryption tools. It will encrypt both data and metadata, including e-mail subjects. Keys won’t be held long-term or ever stored on a server; they’ll be generated on the fly by your device, with fresh ones created periodically for each batch of e-mail—a technique that’s worked well for popular chat-encryption technologies like Off-the-Record Messaging. Thus, even if a Dark Mail provider is hacked or compelled to disclose your data, the government won’t get much more than a pile of encrypted bits; it will need to force you to disclose your password to unscramble them.
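The "generated on the fly" approach described above typically rests on an ephemeral Diffie-Hellman exchange: both sides create fresh, throwaway key pairs, derive a shared secret, and discard everything afterward, so there is no long-term key to seize. A toy sketch with a deliberately small prime (real protocols, including Off-the-Record Messaging, use roughly 2048-bit groups or elliptic curves; the numbers and names here are illustrative only):

```python
import secrets

# Toy finite-field Diffie-Hellman. This 32-bit prime stands in for a
# large safe prime; it is far too small for real use.
p = 0xFFFFFFFB  # 4294967291, the largest prime below 2**32
g = 5           # public generator

def ephemeral_keypair() -> tuple[int, int]:
    """Make a fresh, throwaway key pair for one batch of messages."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

# For each new batch of mail, both devices generate fresh ephemeral keys...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...exchange only the public halves, and derive the same shared secret.
# Once the private halves are deleted, past traffic can't be decrypted
# even by someone who later seizes the devices or the server.
assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)
```

This property, forward secrecy, is what would distinguish a Dark Mail-style provider from Lavabit: there would simply be no master key to demand.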