The attack in Christchurch inspired Ms. Ardern to push for international cooperation against online extremism. She has argued that a country-by-country approach will not work in an interconnected digital world. In addition to France, Britain, Canada, Jordan, Senegal, Indonesia, Australia, Norway, Ireland and the European Commission are also expected to sign the agreement.

Facebook, Google and Microsoft have also said they will sign. Twitter declined to comment.

In announcing the new restrictions on its live video service, Facebook said it was partnering with three universities — the University of Maryland, Cornell University and the University of California, Berkeley — in an effort to develop new technologies for detecting and removing troublesome images and videos from the internet.

Facebook and other companies were slow to identify and remove the Christchurch video — in part because the original had been edited in small ways as it passed across various services.

Through its new university partnerships — backed by $7.5 million in funding — Facebook said it would work on building technology that can detect images and videos that have been manipulated in subtle ways.

Over the past three years, Facebook and other social media giants have come under increasing pressure to identify and remove a wide range of problematic content, including hate speech, false news and violence.

The company has said that it is now using artificial intelligence to pinpoint many types of problematic content and that this technology is rapidly improving.

But A.I. still fails to catch some material, most notably hate speech and false news. And the attack in Christchurch showed the technology has a long way to go when it comes to detecting violent images. Facebook also pays thousands of contract employees to scrutinize and remove problematic content.