January 14, 2017

WhatsApp was the subject of a recent Guardian article making claims of a “backdoor” stemming from a “bug” in the way WhatsApp handles key rotations for users. The problem? WhatsApp will automatically transmit messages after the recipient’s key has changed without first asking the sender to confirm the new key is genuine.

Far from being a “bug” or “backdoor” (a claim so wrong I sure hope the original author of the story, Samuel Gibbs, will issue a retraction), handling key rotation seamlessly is a difficult problem with a long, storied history, along with many attempts to surface such information to the user in order to ask them to make a security decision, such as in the SSH screenshot above.

Clearly an in-person exchange of key fingerprints is the most secure option for establishing a secure channel, but is inconvenient, often impractical, and doesn’t provide a good means for handling key rotation.
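To make the in-person comparison concrete, here is a minimal sketch of how a fingerprint might be derived from a public key: hash the key bytes and render a short, human-comparable string. This is a hypothetical illustration (the hash choice and grouping are my own), not the exact scheme WhatsApp or Signal uses:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Truncate to 128 bits and group into 4-char chunks so two people
    # can read the values aloud and compare them out of band.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

print(fingerprint(b"example-public-key"))
```

The grouping matters more than it looks: comparing eight short chunks aloud is far less error-prone than comparing one 64-character hex blob.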

There are two basic models for solving the key distribution and rotation problem in an automated manner that don’t require in-person confirmations or PGP-like “key signing parties” (and I’m sorry to say it, PGP/GPG fans, but I too don’t think GPG is the future). These methods to determine trusted keys are as follows:

The SSL/TLS PKI Way (used by every https:// web site): Farm trust out to central services (i.e. certificate authorities). Keys can be rotated without your consent or knowledge. Your browser will seamlessly make the same request to a server even if its key has just changed. In messaging apps this generally involves placing a lot of trust in the key servers for the messaging app operators, although systems like CONIKS and Google’s recently announced Key Transparency provide tools for helping keep key servers honest.

The SSH Way (a.k.a. “trust on first use”): record the last known key for an SSH server. If it changes, print a big scary warning (see above). Only after accepting the new key can you connect to the remote host, possibly sending a sensitive credential like a password. (Note that SSH also supports a certificate authority-based PKI)

The main difference between these two approaches is whether keys are automatically trusted after they have been rotated. In the PKI approach used by the web, the new key is automatically trusted, whereas in the SSH TOFU model the user is presented with a warning upon any key change.
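The TOFU half of that comparison can be sketched in a few lines: pin the first key you see for a peer, then flag any later change, in the style of SSH’s known_hosts file. This is a hypothetical sketch of the model, not WhatsApp’s or Signal’s actual implementation:

```python
import hashlib

class KnownKeys:
    """Minimal trust-on-first-use (TOFU) key store, modeled loosely on
    SSH's known_hosts file. Illustrative only."""

    def __init__(self):
        self._pins = {}  # peer id -> pinned key fingerprint

    def check(self, peer: str, key: bytes) -> str:
        fp = hashlib.sha256(key).hexdigest()
        pinned = self._pins.get(peer)
        if pinned is None:
            self._pins[peer] = fp  # first use: trust and pin the key
            return "trusted-first-use"
        if pinned == fp:
            return "ok"  # key unchanged, proceed silently
        self._pins[peer] = fp
        # SSH prompts the user here; WhatsApp (by default) proceeds
        # and only notifies if the user has opted in.
        return "key-changed"

keys = KnownKeys()
print(keys.check("alice", b"key-1"))  # trusted-first-use
print(keys.check("alice", b"key-1"))  # ok
print(keys.check("alice", b"key-2"))  # key-changed
```

Note that the entire policy debate in this post lives in that last branch: both models detect the change identically; they differ only in whether a human is asked about it.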

At first glance the SSH approach might seem to offer some better security properties: the user is notified whenever a key changes, allowing them to do some due diligence on whether the key change was expected or if a man-in-the-middle attack is underway.

Unfortunately, as any of you who have seen the warning at the top of this post know, most key change events are mundane and not an attack. This makes prompting the user for every key change a bit “boy who cried wolf”: the encrypted client is asking the user to make a security decision, but typically the user isn’t under attack (or can’t tell they are) and will therefore almost always accept the new key.



Image originally from Peter Gutmann’s unreleased book Engineering Security

Is prompting the user a good idea in this case? Can the user be expected to make a reasonable security decision? Will the constant prompts whenever someone gets a new phone be too annoying? These are difficult questions.

Open Whisper Systems, the non-profit behind WhatsApp’s encrypted messaging protocol, has taken both approaches: WhatsApp does not prompt users by default, relegating such prompts to an optional setting that WhatsApp users must opt into. However, its in-house messaging app Signal does prompt by default.

Why take two approaches? Isn’t one of them “right”? There is definitely a vocal tinfoil hat crowd who would consider anything less than the Signal approach by default as unacceptable. And while this crowd may generally do a good job reasoning about secure systems as technologies, they have generally lacked the empathy to consider such factors as “Can a normal person actually use this? Will they make a reasonable security decision?”

The net result of optimizing for a pathological threat model at the cost of user experience has had a pretty clear security outcome: users choose insecure tools over the “secure” ones, because the “secure” ones are impossible to use. It takes a lot of empathy to consider whether prompting someone who doesn’t even understand what a cryptographic key is to decide how they want to handle a key rotation event will actually have a net positive security outcome.

There aren’t any right answers here: only tradeoffs.

Signal targets a different audience than WhatsApp: they assume out of the gate that you want a more secure, encrypted messenger. WhatsApp, on the other hand, shipped encryption-by-default that the end user doesn’t even have to be aware of. Where Signal targets an audience of millions, WhatsApp is targeting an audience of billions. The (adjustable) defaults in WhatsApp are designed so encryption can be on-by-default at no cost to the user experience, but still allow those who would like to receive security notifications to receive them by opting in.

Consider what web browsers would be like if they prompted a user to make a security decision whenever the key for a site changed:

I do not think asking users to make decisions like this would tangibly improve the security of the web. However, I do think it would scare people away from visiting sites in the first place.

Now I’d like to take a bit to talk about crypto reporting…

If there were a backdoor in a popular encrypted messaging app, that is big news, and it should be reported on.

This was not a backdoor. I think, had this story been run by a few security experts in advance, most would’ve told you that it is not a backdoor. First, let me say that if you are a reporter sitting on a story like this and are looking for opinions in advance before a release, I’d be happy to offer mine or find an interested cryptographer to put you in touch with. I would really love to help close the gap between reporters and security experts.

But second, there is a recent story about encrypted messaging apps worth reporting on:

The recently released, highly disputed, but increasingly credible-looking Trump dossier contained this tidbit about the messaging app Telegram.

Though Telegram is often described by the media as “ultra-secure”, it has default settings considerably worse than anything in WhatsApp: end-to-end encryption in Telegram is off by default, whereas most major messaging apps (including Apple’s iMessage) always encrypt end-to-end.

It’s unclear exactly what security problem with Telegram the dossier above is referring to. Telegram has a history of being exploited through the SS7 network (an attack which works equally well against WhatsApp). But with end-to-end encryption off by default, and the poor quality of the cryptographic design even when end-to-end encryption is on, Telegram leaves a lot of opportunity for novel attacks, especially the kind perpetrated by nation state agencies.

I’m sure there’s an interesting story here, and one far closer to a legitimate security problem in a major messaging app than how WhatsApp handles key rotation.
