The Guardian was wrong to report in January that the popular messaging service WhatsApp had a security flaw so serious that it was a huge threat to freedom of speech.

But it was right to bring to wide public notice an aspect of WhatsApp that had the potential to make some messages vulnerable to being read by an unintended recipient.

The Guardian did not test, with an appropriate range of experts, a claim that had implications for the more than one billion people who use the Facebook-owned WhatsApp.

In a detailed review I found that misinterpretations, mistakes and misunderstandings happened at several stages of the reporting and editing process. Cumulatively they produced an article that overstated its case.

The Guardian ought to have responded more effectively to the strong criticism the article generated from well-credentialled experts in the arcane field of developing and adapting end-to-end encryption for a large-scale messaging service.

The original article – now amended and associated with the conclusions of this review – led to follow-up coverage, some of which sustained the wrong impression given at the outset. The most serious inaccuracy was a claim that WhatsApp had a “backdoor”, an intentional, secret way for third parties to read supposedly private messages. This claim was withdrawn within eight hours of initial publication online, but withdrawn incompletely. The story retained material predicated on the existence of a backdoor, including strongly expressed concerns about threats to freedom, betrayal of trust and benefits for governments which surveil. In effect, having dialled back the cause for alarm, the Guardian failed to dial back expressions of alarm.

This made a relatively small, expert, vocal and persistent audience very angry. Guardian editors did not react to an open letter co-signed by 72 experts in a way commensurate with the combined stature of the critics and the huge number of people potentially affected by the story. The essence of the open letter and a hyperlink to it were added to the article, but wider consultation and a fundamental reconsideration of the story were needed.

The aspect of WhatsApp at the heart of this matter, put very simply, is as follows. When a user of WhatsApp is offline, any messages at that time in transit to him or her are held in Facebook’s servers. (If unclaimed after 30 days they are deleted, Facebook advised.) If, while offline, the recipient registers a new device, any messages waiting for the person on Facebook’s servers are no longer deliverable because they are encrypted for the person’s old device. To prevent those messages from being lost, when the intended recipient comes back online, any in-transit messages are re-encrypted with the new device’s key and resent automatically. If senders have turned on a notification setting in WhatsApp on their phone, they are told that the key has changed, but not otherwise.
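That flow can be reduced to a toy model. The sketch below is purely illustrative: all names are hypothetical, and `encrypt()` simply pairs a message with a key rather than performing real end-to-end encryption. It models the behaviour as described above, not WhatsApp's actual implementation.

```python
def encrypt(plaintext, device_key):
    # Stand-in for real end-to-end encryption: record which device key
    # the message was encrypted for.
    return (plaintext, device_key)

class SenderClient:
    """Hypothetical sender-side client modelling the described behaviour."""

    def __init__(self, notifications_on=False):
        self.notifications_on = notifications_on
        self.in_transit = []   # plaintext copies of undelivered messages
        self.notices = []      # key-change notices shown to the sender

    def send(self, plaintext, recipient_key):
        # Encrypt for the recipient's current device key; the ciphertext
        # waits on the server while the recipient is offline.
        self.in_transit.append(plaintext)
        return encrypt(plaintext, recipient_key)

    def on_key_change(self, new_key):
        # The recipient registered a new device, so the queued ciphertexts
        # are no longer decryptable. The client re-encrypts the in-transit
        # messages with the new key and resends them automatically; the
        # sender is told only if the notification setting is turned on.
        if self.notifications_on:
            self.notices.append("security code changed")
        return [encrypt(m, new_key) for m in self.in_transit]
```

The window in which a substituted key silently receives resent messages is the scenario the critics were assessing.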

Critics said that the Guardian article overstated the risk of what is known in the jargon as a man-in-the-middle attack, in which a third party could exploit the combination of offline phone, messages in transit and changed key to intercept private communications.

The intensity of the external criticism seems to have stemmed partly from frustration that the article was in the Guardian, with its deserved reputation for using and reporting on technology in ways that help bridge the gap between non-experts and specialists.

Some were concerned that an overstated claim in the Guardian about risk to the security of WhatsApp would cause non-expert users to become unduly worried and needlessly abandon WhatsApp for less secure communication methods.

The critics had two different groups of WhatsApp users in mind.

First, the vast majority of users, for whom WhatsApp security is fine and who would probably experience worse communications security if they stopped using it.

Second, a much smaller group, activists in authoritarian regimes, whose threat model means they need high security and for whom the popularity of WhatsApp means they can “blend into the crowd”. This group could be at serious risk if they abandoned WhatsApp for something less secure, or perhaps even for a service more secure but used by relatively few, because that might attract the authorities’ attention.

A related concern was that authoritarian governments could exploit the Guardian story to move against the use of encrypted messaging.

During the review I independently confirmed that a Turkish government official had used the article when, in effect, attempting to deter users from WhatsApp. It was also confirmed that some activists who had been in the process of switching from a less secure messaging service to WhatsApp became confused by the Guardian story. On available evidence, I am unable to conclude whether these effects were widespread. But one case is one too many.

I am not an expert in this field. For the review I consulted suitably experienced experts other than the 72 who had already declared their view.

I found a consensus that in practice, as WhatsApp is presently understood, it would be very difficult to use this aspect of WhatsApp for systematic targeted surveillance. Experts described challenges with timing, targeting and concealment which they said would be formidable even for a major private or public actor with access to Facebook servers.

Experts were unanimous that it is not a backdoor. “Vulnerability” is a contested term, acceptable in this context to some but not all. One called it a “weakness”, others a “trade-off”.

One expert called WhatsApp “pretty excellent security against broad surveillance and that is valuable”. Another said the trade-off was reasonable. A third said: “This product is safe for the majority of people.”

I conveyed to Facebook one specialist’s proposal that the relevant notification setting should be default opt-out, not default opt-in, so that although in-transit messages would continue to be resent automatically, at least senders would be informed that it had happened unless they turned the notification setting off. Facebook replied: “Yes, WhatsApp considered this but decided against it when considering how it might be interpreted by the average user, who may be using a smartphone for the first time. Instead we chose to make this option available to those wanting an extra layer of security on their accounts.”

Experts confirmed that weighing these kinds of considerations about user behaviour is normal when designers are deciding the balance between security and usability in a mass-market service like WhatsApp. The experts confirmed the gist of the responses to the core Guardian article blogged on the day of its publication by two figures central to WhatsApp's end-to-end encryption, Moxie Marlinspike and Brian Acton.

Several independent experts agreed with this part of the open letter’s criticism when it was read to them:

“It’s important to recognize the tendency of many security researchers, especially inexperienced ones, to overestimate the practical impact of vulnerabilities they find, and being ignorant how security needs play out in the real world, the massive amount of work on actual user behavior in the face of friction and warnings, and the need for independent verification and context.”

I accept the consensus view of the experts and, in consultation with editors, have arranged for the coverage to be amended and for a note to be added drawing attention to the review and linking to this column.

I do not agree with critics that the story should be entirely retracted. After all the surrounding controversy, the story’s complete disappearance would be odd. More importantly, there is a clear public interest in the Guardian preserving (with appropriate amendments) reporting on this aspect of WhatsApp and the inherent trade-off. With so many users, watchfulness, news, analysis and debate about the security of WhatsApp are important. Communications privacy is a key element of human rights in the digital era, and developments affecting it ought to be reported. Also, Facebook needs scrutiny, both generally due to its power and specifically in its management of WhatsApp.

Facebook’s plan last year to share within its group of companies WhatsApp users’ personal information caused controversy and appeared to breach prior commitments. Regulators stepped in, including in the UK. At least one court has upheld a regulator’s intervention.

As one expert noted, Facebook’s servers are a black box. It is not possible for outsiders to verify exactly what happens when messages pass through Facebook’s servers on their way to and from WhatsApp users. Trust is required. Given Facebook’s record, steady scrutiny is the right stance.

We learned from Edward Snowden’s disclosures that large service providers which handle the communications of millions of people have been compromised in the past. Sections of a research paper published in March 2017 provide a more recent review of encryption workarounds.

The UK home secretary, Amber Rudd, has wondered aloud whether service providers should remain lawfully able to deny government authorities access to encrypted messaging. Measures affecting the security of encrypted messaging seem likely to be among new anti-terrorism proposals.

I found a consistent view among experts that Facebook could do more to make non-expert users aware of the trade-off and educate them about how to assess their own threat model.

Since the Guardian article, WhatsApp has been better secured by the introduction of optional two-factor verification in February. That process was being tested in November 2016 and had been reported in a specialist blog in July 2016, so the Guardian coverage did not prompt the improvement. We will probably never know whether it spurred Facebook to accelerate the process.