LONDON -- Hours after the third terrorist attack in the U.K. in three months, British leaders on Sunday escalated their criticism of Silicon Valley, calling for international regulations to hinder extremists who use cyberspace to spread their message and recruit supporters.


"We cannot allow this ideology the safe space it needs to breed," British Prime Minister Theresa May said Sunday, the morning after a London attack killed at least seven people and injured scores more. "Yet that is precisely what the internet and the big companies that provide internet-based services provide."

Mrs. May said Sunday that Britain must work with other democracies to "reach international agreements" to regulate cyberspace to prevent terrorism planning. Her statement ratcheted up already critical remarks her cabinet members made in the wake of a March attack, also in London, that killed five people near Parliament. Saturday's London attack came 12 days after a suicide bomber killed 22 people outside a concert in Manchester, England.

In lambasting internet giants such as Facebook Inc. and Twitter Inc., the U.K. government joined Washington and other capitals in saying the companies don't do enough to battle extremists.

Mark Mitchell, New Zealand's defense minister, on Sunday called the ability of terrorists to use social media and the internet to rally supporters a "clear and present security threat to us all." That view was driven home by the attacks in London, he said in Singapore at the International Institute for Strategic Studies' annual Shangri-La Dialogue. New Zealand is part of a close intelligence-sharing partnership with the U.K., U.S., Australia and Canada.

Technology giants have struggled for two decades over how -- and how much -- to curb the spread of undesirable content, from pirated music in the 2000s to false news reports in recent months. Governments ramped up pressure on them to crack down on online terrorist propaganda in the wake of terrorist attacks in Europe and the U.S. in 2015.

In each instance, the challenge remained the same: Tech executives must balance their desire to help fight a common enemy -- like terrorists -- with Silicon Valley's libertarian values in protecting free-speech rights of most internet users.

Apple Inc. resisted U.S. authorities' request for help to unlock an iPhone owned by Syed Rizwan Farook, the gunman who with his wife killed 14 people in San Bernardino, Calif., in December 2015. Authorities unlocked the phone without the company's assistance.

Many tech companies say they already work hard to police their platforms for terrorist content, and cooperate with judicial and police investigations. When it comes to propaganda, Alphabet Inc.'s YouTube, Facebook, Twitter and Microsoft Corp. all agreed last year to create a common database of identifiers of terrorist images to speed up flagging and removal of propaganda videos.

Twitter said it suspended 376,890 accounts in the second half of 2016 for promoting terrorism. The company said it identified almost two-thirds of those itself, with less than 2% shut down in response to government requests.

Last week, the European Union's executive arm cheered such efforts, saying the companies removed illegal content, including terrorist propaganda, 59% of the time it was flagged for review, up from a rate of 28% six months earlier.

Tech firms have also started using tools born in internet advertising to try to nudge internet users at risk of radicalization away from becoming terrorists. They have funded efforts to buy ads that target potential radical recruits, directing them to content that shows horrors of life under Islamic State, or other messages from the Muslim community disputing the terrorist exhortations from Islamic radicals.

A spokesman for Alphabet Inc.'s Google said: "Our thoughts are with the victims of this shocking attack, and with the families of those caught up in it. We are committed to working in partnership with the government and NGOs to tackle these challenging and complex problems."

Twitter's U.K. head of public policy, Nick Pickles, said: "We continue to expand the use of technology as part of a systematic approach to removing this type of content."

"Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it -- and if we become aware of an emergency involving imminent harm to someone's safety, we notify law enforcement," Simon Milner, director of policy at Facebook, said in a statement.

British leaders say the internet companies can do more.

"Companies should be fined if they are not taking down jihadi propaganda or extremist propaganda, which should be removed because it is illegal," Yvette Cooper, a Labour politician who headed the Home Affairs Select Committee, told the BBC on Sunday.

Pete Burnap, director of Cardiff University's Social Data Science Lab, said: "There are difficult conversations to be had around what is and is not acceptable in different countries -- where is the line between free speech and remarks that are likely to divide communities and isolate individuals." While technology companies have taken steps to crack down on extremist accounts, he said, more effort needs to go into understanding their impact on communities.