Facebook gets the lion’s share of attention these days in discussions around fake news, but its subsidiary WhatsApp can be a misinformation highway in its own right. The same factors that make sending messages and images so easy in WhatsApp also make it tantalizingly easy for people to pass along shocking or emotionally charged hearsay with the aggravating disclaimer “forwarded as received.”

These forwards run the gamut from the harmless to the horrifying, from well-meaning parents forwarding unfounded claims of DIY medical remedies to friends spreading inflammatory news without verifying its source or veracity. In Brazil, WhatsApp was weaponized to spread fake news: companies and organizations supporting President Bolsonaro used it extensively to spread false and deceptive content to get their candidate elected, and the tactic turned out to be incredibly successful.

Several Asian countries face similar issues, where WhatsApp has become a powerful misinformation tool because it is a chief source of news for large sections of the population. The Reuters Institute for the Study of Journalism found that 25% of people in Turkey say they use WhatsApp to share and discuss news in a given week; that number jumps to 46% in Brazil and 51% in Malaysia.

In some cases the spread of fake news even turns deadly — in India it has led to several episodes of lynchings and mob violence. In one instance, a group of men from a nomadic community came to a town to participate in their Sunday market. Taking shade under a tree to eat and rest, one of them offered a biscuit to a passing girl. Unbeknownst to him, in the days prior to their visit, images and videos had been circulating in that town’s WhatsApp groups warning of a kidnapping and organ-harvesting ring, and cautioning residents to be extremely vigilant.

The man’s simple kindness in the context of a storm of misinformation caused onlookers to assume the worst, and resulted in a horrific lynching which ended with the gruesome deaths of five of the men. The pictures and videos that had been circulating in this town turned out to have been taken out of context in an effort to alarm parents: the images of dead children the villagers had been forwarding one another were actually victims of a Syrian chemical attack a few years prior. There was no kidnapping/organ-harvesting ring.

What makes WhatsApp such a good conduit for fake news?

WhatsApp is unique because messages are end-to-end encrypted, so WhatsApp itself has restricted visibility into the content being shared on its network. In a way this encryption also reinforces users’ belief in what they see on the app: there are no ads, no sponsored posts, no overt institutional messages. Everything comes from individuals, in personal messages or in group chats that users willingly participate in. It is human nature to extend credibility online to those we trust offline, and this trust is precisely what malevolent actors seek to exploit.

WhatsApp also lets those who originate such messages hide behind layers of anonymity. Recipients of forwarded messages have no idea where they came from, and originators become essentially anonymous after the first round of forwards, if not earlier, making messages very hard to trace back to them.

Another key feature that makes WhatsApp a petri dish for fake news is that news travels privately, away from public scrutiny, and often in echo chambers. Nobody from the outside knows what has been shared, or who has received what. There’s no public debate like there is on sites like Twitter or Facebook. And in groups created specifically to spread propaganda, nobody can challenge it, or else they might get thrown out of the group or rebutted with even more propaganda.

And of course, there is human nature. Fake news is often alarming or shocking. The most effective fake news taps into the rawest of human emotions: anger, fear, and anxiety. Well-meaning, good-natured people will share evocative messages with friends and family because they want to warn or discuss, to protect their loved ones and to spread awareness of things that affect them at a primal level. Fake news is nothing more than weaponized rumors and manufactured gossip. The medium and the method are new to the modern age, but the impulse to spread the message is a deeply human flaw.

In WhatsApp’s defense, it has taken an active role in combating the spread of fake news on its network in recent months. It limited forwards to just 20 people in Brazil after forwards played a huge part in the elections there. In India, WhatsApp launched a series of education and research initiatives, including a set of educational commercials.

Educating users is a positive step, but it is not enough to rely on user behavior. Here are some changes WhatsApp can make to proactively combat fake news on its platform.

What WhatsApp can do

Update the forwarded tag

WhatsApp encrypts the content of messages, but it still knows who sent each message, to whom, and when. Forwarded messages currently carry a small Forwarded tag, which could be made more prominent and easier to spot. Adding metadata would help as well. If tapping the tag showed the origin date and time (the first time the message was ever forwarded in the chain that eventually landed it in your chats), people could better judge whether something has been circulating for a long time, or whether news that is too good or too bad to be true has magically surfaced at a politically convenient moment. And though location data can be spoofed, if WhatsApp determines that a message originated far from your geographic location or area code, it could surface that information here too.

What if the Forwarded tag was visually bolder, and if tapping on it gave more information like the first time that message was shared?
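As a rough illustration of the tap-to-reveal idea, the tag's extra details could be assembled from per-chain metadata. This is a hypothetical sketch, not WhatsApp's actual data model: `ForwardMetadata`, its fields, and the thresholds are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ForwardMetadata:
    # Hypothetical metadata WhatsApp could attach to a forward chain
    first_forwarded_at: datetime  # when the chain started
    origin_region: str            # coarse region of the first sender

def forward_tag_details(meta: ForwardMetadata, user_region: str,
                        now: datetime) -> list[str]:
    """Build the extra lines a bolder Forwarded tag could reveal on tap."""
    details = [f"First forwarded on {meta.first_forwarded_at:%d %b %Y}"]
    if now - meta.first_forwarded_at > timedelta(days=30):
        details.append("This message has been circulating for over a month")
    if meta.origin_region != user_region:
        details.append(f"This forward chain appears to have started in "
                       f"{meta.origin_region}")
    return details
```

Because only timestamps and coarse routing metadata are involved, a feature like this would not require WhatsApp to read message content.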

Improve the process of getting added to a group

In Brazil, programs were being sold that legally aggregated phone numbers from databases, created WhatsApp groups that included those numbers as members, and used those groups to forward political messages.

Groups are a great feature of WhatsApp when they are voluntarily joined. To curtail this kind of programmatic spamming however, WhatsApp should re-imagine the group joining experience:

Require the person who added you to a group to provide context or an invite message

Run some graph analysis on the group; if you don’t have many contacts in it, WhatsApp could caution you with a message like, “This person is not in your contacts list. Be careful about being added to groups by strangers.”

Don’t deliver messages until you accept being added to that group
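The graph-analysis check above could be as simple as counting how many group members are already in your contacts. A minimal sketch, with hypothetical function names and a made-up `min_known` threshold:

```python
from typing import Optional

def should_warn_on_group_add(user_contacts: set[str], group_members: set[str],
                             adder: str, min_known: int = 2) -> Optional[str]:
    """Return a caution message if a group add looks like programmatic spam.

    A toy heuristic; a real system could also weigh group age, the adder's
    history, and how many other groups were created from the same number.
    """
    if adder not in user_contacts:
        return ("This person is not in your contacts list. "
                "Be careful about being added to groups by strangers.")
    if len(user_contacts & group_members) < min_known:
        return "You know very few people in this group."
    return None  # nothing suspicious under these (assumed) signals
```

Crucially, this kind of check runs on data the user's own device already has (their contacts and the group roster), so it needs no access to message content.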

Show information about link sources

Just as on Facebook, a small info icon on link previews can help people pay attention to the source. Websites that are new, unknown, or known to spread fake news should be labeled as such so that recipients know more about the source. For established news websites, the icon could display relevant information and political leanings to help recipients learn more at a glance.

Facebook’s new context feature on links helps users learn more about the source of information without having to leave the platform

Being able to get relevant source information without having to leave WhatsApp is a huge advantage. Reducing the barrier to learning more about where a link is from encourages people to do a quick check and allows them to contextualize the news they receive.
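Such labeling could boil down to a lookup from a link's domain into a reputation table. A minimal sketch, assuming a curated table exists (the domains and labels below are invented for illustration):

```python
from urllib.parse import urlparse

# Hypothetical reputation data; a real system would draw on curated
# fact-checker feeds and domain-age signals, updated server-side.
SOURCE_LABELS = {
    "example-news.com": "Established outlet",
    "totally-real-cures.info": "Known to spread false information",
}

def label_link(url: str) -> str:
    """Map a shared link to a short source label for the info icon."""
    domain = urlparse(url).netloc.removeprefix("www.")
    return SOURCE_LABELS.get(domain, "New or unknown source")
```

Since only the domain is looked up, the table itself could ship with the app and be refreshed periodically, without the message leaving the device.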

Suggest related searches to verify information

Double tapping on a message could surface quick links to launch web searches on phrases or topics contained within it. Take for example this viral message claiming, among other things, that lime and lukewarm water before breakfast is a thousand times more effective than chemotherapy, allegedly according to the Maryland College of Medicine.

The text of the widely circulated hoax message

Double tapping on this message might surface launches into searches like “cancer lime lukewarm water” or “cancer Maryland College of Medicine”. The top results for these searches debunk this myth and provide links to reliable, trustworthy information on the topic:

A number of top results for this simple search yield links debunking this myth. Easy access to suggested searches like this can help users do some quick research with just a tap or two

What’s great about this solution is that WhatsApp can train these machine learning and natural language processing models and have them run on the user’s device, without necessarily having to send or store the decrypted message back to WhatsApp’s central servers. It can periodically make these models smarter with app updates, but this solution would not require it to have direct access to users’ messages and instead simply rely on edge computing. Edge computing simply means that the processing of the data happens on the device rather than in the cloud; this is ideal for WhatsApp because it allows it to continue to position itself as a secure, private messaging platform.
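To make the idea concrete, here is a toy stand-in for the on-device model: it picks the most salient words in a forwarded message to seed a search query. The stopword list, length filter, and frequency scoring are placeholder assumptions; a real NLP model would be far more sophisticated.

```python
import re
from collections import Counter

# Placeholder stopword list; a shipped model would use proper
# language-specific resources, updated via app releases.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it",
             "that", "this", "than", "more", "before", "times", "according"}

def suggest_searches(message: str, max_terms: int = 4) -> str:
    """Extract salient words from a message to build a search suggestion.

    Runs entirely on the text it is given, so it could live on-device.
    """
    words = re.findall(r"[a-zA-Z']+", message.lower())
    salient = [w for w in words if w not in STOPWORDS and len(w) > 3]
    top = [w for w, _ in Counter(salient).most_common(max_terms)]
    return " ".join(top)
```

For the hoax message above, a query built this way would pair the claimed remedy with the claimed disease, which is exactly the combination that surfaces debunking articles at the top of search results.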

Similar machine learning solutions could be added to extract text from forwarded images, screenshots, and videos and provide the same sorts of quick launches to more information. A lot of forwarded messages in WhatsApp come in multimedia form, and expanding the solution from links and text to images and videos would make it more robust.