The company, along with Google and Facebook, is under pressure to prevent interference in the 2020 US elections

Twitter will begin to label and in some cases remove doctored or manipulated photos, audio and videos that are designed to mislead people.

The company said on Tuesday that the new rules prohibit sharing synthetic or manipulated material that’s likely to cause harm. Material that is manipulated but isn’t necessarily harmful may get a warning label.

Under the new guidelines, the slowed-down video of the House speaker, Nancy Pelosi, in which she appeared to slur her words, could get the label if someone tweets it after the rules take effect on 5 March. If it were also shown to cause harm, Twitter could remove it.


But harm could be difficult to define, and some material will probably fall into a gray area.

“This will be a challenge and we will make errors along the way – we appreciate the patience,” Twitter said in a blogpost. “However, we’re committed to doing this right.”

Twitter said it considers threats to the safety of a person or a group serious harm, along with the risk of mass violence or widespread civil unrest. But harm could also mean threats to people’s privacy or to their ability to express themselves freely, Twitter said. This could include stalking, voter suppression or intimidation, epithets and “material that aims to silence someone”.

Google, Facebook, Twitter and other technology services are under intense pressure to prevent interference in the 2020 US elections after they were manipulated four years ago by Russia-connected actors.

On Monday, Google’s YouTube clarified its policy around political manipulation, reiterating that it bans election-related “deepfake” videos. Facebook has also been ramping up its election security efforts.

As with many of Twitter’s policies, including those against hate speech and abuse, success will be measured by how well the company enforces them. Critics say that even with rules in place, enforcement can be uneven and slow. This is likely to be especially true for misinformation, which can spread quickly on social media even with safeguards in place. Facebook, for instance, has been using third-party factcheckers to debunk false stories on its site for three years. While those efforts are paying off, the battle against misinformation is far from over.

Twitter said it was committed to seeking input from its users on such rules. Twitter said it posted a survey in six languages and received 6,500 responses from around the world. According to the company, the majority of respondents said misleading tweets should be labeled, though not everyone agreed on whether they should be removed or left up.