People who break Facebook rules would be restricted from using the live-stream service for set periods of time.

Facebook has said it will introduce a new set of rules for its live-streaming feature as it steps up efforts to curb hate speech and online violence in the aftermath of the mass shooting in New Zealand.

The social media platform said on Tuesday that it was introducing a “one-strike” policy to Facebook Live, where people who break the rules would immediately be restricted from using the live-streaming service for set periods of time – for example 30 days – starting on their first offence.

Before today, content that violated its policies – such as hate speech, or terrorist activity – was taken down and in some cases users were banned from Facebook altogether if they continued to commit such offences.


“We recognise the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” Guy Rosen, Facebook’s vice president of integrity, said in a blog post.

“Our goal is to minimise the risk of abuse on Live while enabling people to use Live in a positive way every day,” he added.

The platform plans on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.


The company did not specify which offences would trigger the one-strike policy or how long suspensions would last, but a spokeswoman said the new rules would have made it impossible for the gunman to use Live on his account.

Rosen also said the new rules require “technical innovation” to stay ahead of the kind of media manipulation the company experienced after the Christchurch attack, when some people modified the video to avoid detection and repost it after it was taken down.

“To that end, we’re also investing $7.5m in new research partnerships with leading academics from three universities, designed to improve image and video-analysis technology,” Rosen said.

The new decision comes as national leaders from around the world gather for an Online Extremism Summit in Paris on Wednesday.

The initiative, known as the “Christchurch Call”, was pushed by New Zealand’s Prime Minister Jacinda Ardern after a white supremacist killed 51 people in attacks on two mosques in New Zealand, live-streaming the graphic footage via Facebook’s Live service.

“There is a lot more work to do, but I am pleased Facebook has taken additional steps today alongside the call and look forward to a long-term collaboration to make social media safer by removing terrorist content from it,” Ardern said.

Representatives from Facebook, Alphabet Inc’s Google, Twitter Inc and other tech companies are expected to take part in the meeting, although Facebook Chief Executive Mark Zuckerberg will not be in attendance.