The unelected executive branch of the European Union (EU) has ordered social media and tech firms like Facebook to delete content it considers “illegal” within one hour, as it ramps up efforts to censor the Internet.

The EU insists it is focusing on “terrorist” content. However, when Germany introduced a similar ‘delete in 24 hours’ law on January 1st this year, it resulted in satire and some right-wing opinions being removed.

Critics say the short period of time given for removal and the threat of escalation means tech firms are overly cautious and mistakenly deleting content, causing a wider chilling effect on free speech as people become wary of sharing legitimate views.

The EU’s latest recommendations are non-binding but could be taken into account by European courts and are designed to pressure the tech firms. Over the coming three months, the Commission will decide if “legislation” is needed to force tech firms into action.

Because terrorist content online is a grave risk to the security of Europeans, it must be treated as a matter of the utmost urgency. https://t.co/Li5Hwd6gPh #illegalcontent #SecurityUnion pic.twitter.com/cshAPzJDdw — European Commission (@EU_Commission) March 1, 2018

In its ultimatum issued on Thursday, the unelected European Commission insisted its recommendations applied only to illegal online material, including terrorist manuals, incitement to hatred, child sexual abuse images, and copyrighted content.

However, in a “fact sheet” about the Commission’s fight against “illegal online content”, also published Thursday, it said “hate speech” and “xenophobic or racist” speech would also be targeted, without defining these categories or clarifying whether criticism of mass migration and radical Islam would be allowed.

Facebook said in a statement: “We share the goal of the European Commission to fight all forms of illegal content. There is no place for hate speech or content that promotes violence or terrorism on Facebook.”

EDiMA, an industry association that includes Facebook, Google, and Twitter, said it was “dismayed” by the Commission’s announcement.

“Our sector accepts the urgency but needs to balance the responsibility to protect users while upholding fundamental rights — a one-hour turn-around time in such cases could harm the effectiveness of service providers’ take-down systems rather than help,” it said in a statement.

The European Digital Rights group also criticised the move for putting too much power into the hands of big tech firms to “regulat[e] the free speech of Europeans”.

It slammed the “short-cut” recommended by the Commission, as “it puts the focus on ‘voluntary’ measures by internet companies [and] bypasses democratic accountability”.

“Since May 2016, Facebook, Twitter, YouTube, and Microsoft have committed to combatting the spread of illegal online hate speech in Europe through a Code of Conduct,” the fact sheet explains.

In that code, the tech firms also promised to help the EU “criminalise” perpetrators as well as re-educate them by “promoting independent counter-narratives” that Brussels favours.

The move was branded “Orwellian” by MEPs at the time, and digital freedom groups promised to pull out of any further discussions with the Commission, calling the new policy “lamentable”.

Facebook Signs EU Pledge To Suppress ‘Hate Speech’ And Promote ‘Counter Narratives’ https://t.co/mNYZs7g13R pic.twitter.com/0BjEV9fYSh — Breitbart London (@BreitbartLondon) May 31, 2016

The Commission also revealed this Thursday that “under the Code of Conduct on Countering Illegal Hate Speech Online, internet companies now remove on average 70 [per cent] of illegal hate speech notified to them and in more than 80 [per cent] of these cases, the removals took place within 24 hours”.

It added: “However, illegal content online remains a serious problem with great consequences for the security and safety of citizens and companies, undermining the trust in the digital economy.”

The EU’s Vice-President for the Digital Single Market, Andrus Ansip, said: “Online platforms are becoming people’s main gateway to information, so they have a responsibility to provide a secure environment for their users. What is illegal offline is also illegal online.

“While several platforms have been removing more illegal content than ever before – showing that self-regulation can work – we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens’ security, safety and fundamental rights.”