In the wake of Britain’s third major attack in three months, Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online.

What are technology companies doing to make sure extremist videos and other terrorist content doesn’t spread across the internet?

Internet companies use technology plus teams of human reviewers to flag and remove posts from people who engage in extremist activity or express support for terrorism.

Google, for example, says it employs thousands of people to fight abuse on its platforms. Google’s YouTube service removes any video that has hateful content or incites violence, and its software prevents the video from ever being reposted. YouTube says it removed 92 million videos in 2015; 1 percent were removed for terrorism or hate speech violations.

Facebook, Microsoft, Google and Twitter teamed up late last year to create a shared industry database of unique digital fingerprints for images and videos that are produced by or support extremist organisations. Those fingerprints help the companies identify and remove extremist content. After the attack on Westminster Bridge in London in March, tech companies also agreed to form a joint group to accelerate anti-terrorism efforts.
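The mechanics of that shared database can be sketched in a few lines. This is a simplified illustration, not the companies' actual system: real deployments use perceptual hashes (such as Microsoft's PhotoDNA) that still match after a video is re-encoded or cropped, whereas the plain SHA-256 used here only matches byte-identical files. All function names are hypothetical.

```python
import hashlib

# Hypothetical shared set of fingerprints for known extremist images/videos.
# In practice each company checks uploads against a jointly maintained database.
shared_database = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint (hash) of an uploaded file."""
    return hashlib.sha256(content).hexdigest()

def report_content(content: bytes) -> None:
    """One company flags a file; its fingerprint is shared with all members."""
    shared_database.add(fingerprint(content))

def should_block(content: bytes) -> bool:
    """Any participating company can screen new uploads against the set."""
    return fingerprint(content) in shared_database

# Once one platform reports a file, the others can catch re-uploads.
report_content(b"known extremist video bytes")
print(should_block(b"known extremist video bytes"))  # True
print(should_block(b"unrelated home video"))         # False
```

The key design point is that only fingerprints are shared, not the underlying files, so companies can cooperate on removal without exchanging the content itself.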

Twitter says in the last six months of 2016, it suspended a total of 376,890 accounts for violations related to the promotion of extremism. Three-quarters of those were found through Twitter’s internal tools; just 2 percent were taken down because of government requests, the company says.

Facebook says it alerts law enforcement if it sees a threat of an imminent attack or harm to someone. It also seeks out potential extremist accounts by tracing the “friends” of an account that has been removed for terrorism.

Why are technology companies clashing with governments over extremist communications?

Since Edward Snowden’s 2013 disclosures about National Security Agency surveillance, several tech companies have started encrypting instant messages and other data (that is, scrambling them to thwart spies) so tightly that even the companies can’t read them. Governments are not happy about that.
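The architectural point, that the company relaying a message holds only ciphertext and no key, can be shown with a toy example. This is not real cryptography (a one-time-pad-style XOR stands in for the actual ciphers these apps use), just a sketch of why the provider itself cannot decrypt end-to-end encrypted traffic.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with a matching key byte (one-time pad)."""
    return bytes(b ^ k for b, k in zip(data, key))

# The two endpoints agree on a key through a key exchange the server never sees.
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)  # encrypted on the sender's device
server_view = ciphertext              # all the company ever stores or relays

# The server cannot recover the plaintext without the key...
assert server_view != message
# ...but the recipient, holding the key, decrypts on their own device.
assert xor_bytes(server_view, key) == message
```

Because the keys live only on the users' devices, a company served with a warrant has nothing readable to hand over, which is the crux of the dispute described below.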

After the 2015 mass shooting in San Bernardino, California, and again after the Westminster Bridge attack, the U.S. and U.K. governments sought access to encrypted messages exchanged by extremists who carried out the attacks. Apple and Facebook’s WhatsApp refused, noting that they didn’t hold the keys needed to unscramble such messages. Both governments eventually found other ways to get the information they wanted.

Some in government, including former FBI Director James Comey and Democratic Sen. Dianne Feinstein of California, have argued that the inability to access encrypted data is a threat to security. Feinstein has introduced a bill to force companies to give the government so-called “backdoor” access to encrypted data so that investigators could read messages on these services.

Shouldn’t tech companies be forced to share encrypted information if it could protect national security?

Weakening encryption won’t make people safer, says Richard Forno, who directs the graduate cybersecurity programme at the University of Maryland, Baltimore County. Terrorists will simply take their communications deeper underground by developing their own cyber channels or even reverting to paper notes sent by couriers, he said.

“It’s playing whack-a-mole,” he said. “The bad guys are not constrained by the law. That’s why they’re bad guys.”

Building backdoors into encryption could also weaken it in ways that hackers, criminals and foreign agents could exploit. That could potentially jeopardise all sorts of vital data, from personal communications and documents to bank accounts, credit card transactions, medical history and other information that people want to keep private.

But Erik Gordon, a professor of law and business at the University of Michigan, says society has sometimes determined that the government can intrude in ways it might not normally, as in times of war. He says laws may eventually be passed requiring companies to share encrypted data if police obtain a warrant from a judge.

“If we get to the point where we say, ‘Privacy is not as important as staying alive,’ I think there will be some setup which will allow the government to breach privacy,” he said.

Is it really the tech companies’ job to police the internet and remove content?

Tech companies have accepted that this is part of their mission. In a Facebook post earlier this year, CEO Mark Zuckerberg said the company was developing artificial intelligence so its computers can tell the difference between news stories about terrorism and terrorist propaganda. “This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” Zuckerberg said.

But Gordon says internet companies may not go far enough, since they need users in order to sell ads.

“Think of the hateful stuff that is said. How do you draw the line? And where the line gets drawn determines how much money they make,” he said.

Others say the focus on tech companies and their responsibilities is misplaced. Ross Anderson, a professor of security engineering at the University of Cambridge, says blaming Facebook or Google for the spread of terrorism is like blaming the mail system or the phone company for Irish Republican Army violence 30 years ago. Instead of working together to censor the internet, Anderson says, governments and companies should work together to share information more quickly.

Former Secretary of State John Kerry also worries about placing too much blame on the internet instead of the underlying causes of violence.

“The bottom line is that in too many places, in too many parts of the world, you’ve got a large gap between governance and people and between the opportunities those people have,” Kerry said on NBC’s “Meet the Press.”