Facebook, Twitter, Microsoft, and YouTube today agreed to European regulations that require them to review "the majority of" hateful online content within 24 hours of being notified — and to remove it, if necessary — as part of a new "code of conduct" aimed at combating hate speech and terrorist propaganda across the EU. The new rules, announced Tuesday by the European Commission, also oblige the tech companies to identify and promote "independent counter-narratives" to hate speech and propaganda published online.

Hate speech and propaganda have become a major concern for European governments following terrorist attacks in Brussels and Paris, and amid the ongoing refugee crisis, which has inflamed racial tensions in some countries. Facebook has been working with the German government to more proactively combat racist or xenophobic content, after facing initial criticism from the country's justice minister. Facebook, Twitter, and Google also previously agreed to remove hate speech from their platforms within 24 hours in Germany.

An "urgent need"

The EU has been pushing for web companies to combat terrorist propaganda, as well, with some developing their own material to counter efforts from groups like ISIS. The code of conduct announced today marks the first effort to unify policy on online hate speech across the EU.

"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech," Vĕra Jourová, the EU commissioner for justice, consumers, and gender equality, said in a statement Tuesday. "Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and to spread violence and hatred."

Europe's crackdown on hate speech has put tech companies in a difficult position: governments are pushing them to assume more responsibility for policing illegal content, while rights groups have raised concerns about free speech and about how the code of conduct was structured. European Digital Rights (EDRi), a Brussels-based advocacy group, criticized the code of conduct in a post published Tuesday, saying that it delegates tasks to private companies that should be carried out by law enforcement. EDRi and Access Now, an international rights group, said in a joint statement that they would withdraw from future discussions, arguing that civil society organizations were "systematically excluded" from negotiations over the code of conduct.

"In short, the 'code of conduct' downgrades the law to a second-class status, behind the 'leading role' of private companies that are being asked to arbitrarily implement their terms of service," the joint statement reads. "This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism."

In statements on the code of conduct, all four tech companies said they remain committed to cracking down on illegal hate speech while still allowing for the free flow of information across their platforms.

"We’re committed to giving people access to information through our services, but we have always prohibited illegal hate speech on our platforms," Lie Junius, Google's head of public policy and government relations, said in a statement. "We have efficient systems to review valid notifications in less than 24 hours and to remove illegal content. We are pleased to work with the Commission to develop co- and self-regulatory approaches to fighting hate speech online."

Update May 31st, 9:08AM ET: This article has been updated to include a joint statement from EDRi and Access Now.