WhatsApp’s users will only be able to forward messages to 20 people, as the Facebook-owned messaging service attempts to crack down on the viral spread of hateful misinformation.

In India, where false rumours about child abduction spread virally over WhatsApp, leading to several vigilante murders over the past year, the new limit will be even stricter: each message can be forwarded just five times. In that country, where according to Facebook “people forward more messages, photos, and videos than any other country in the world”, WhatsApp is also removing the “quick forward” feature, a button that appears next to photos, videos and links. The previous forwarding cap, rarely hit by users, was more than 250.

“We’re horrified by the violence in India, and we’ve announced a number of different product changes to help address these issues,” a WhatsApp spokesperson said. “It’s a challenge that requires action by civil society, government and tech companies.”

More than 20 people have been lynched in India in the last two months after being accused of child abduction, according to media reports.

WhatsApp faces a very different challenge in tackling misinformation than public sites such as Facebook, Twitter and YouTube. Messages sent over the platform are end-to-end encrypted, which means WhatsApp is technologically incapable of reading them – or of performing any other scanning, automated or human-driven.

The company has made other changes in an attempt to slow the spread of viral falsehoods, such as labelling forwarded messages so it is clear that a message passed on by a friend or family member did not originate with them.

On Sunday, Indian police arrested 25 people after a man was killed by a mob, in the latest case of a WhatsApp-rumour-fuelled lynching. Mohammad Azam, 27, was attacked by a group of 2,000 people in southern Karnataka on Friday, after a rapid-fire WhatsApp rumour spread accusing him of attempted child abduction. Three police were injured in rescue attempts. Two of Azam’s friends were injured during the attack.

Across its platforms, Facebook has gained a reputation for nimble responses to bad publicity. On Thursday, the company announced that moderators who come across an account that has been flagged for any reason, on any of its platforms, will be able to put a hold on it if they have “a strong indication” that the user is underage.

The new policy is a response to a Channel 4 News report earlier this week, in which undercover filming showed that moderators were instructed to ignore accounts clearly run by children – in contravention of Facebook’s own policies – unless a Facebook user had flagged them as underage.