Melissa Eddy and Mark Scott, New York Times, June 30, 2017

Social media companies operating in Germany face fines of as much as $57 million if they do not delete illegal, racist or slanderous comments and posts within 24 hours, under a law passed on Friday.

The law reinforces Germany’s position as one of the most aggressive countries in the Western world at forcing companies like Facebook, Google and Twitter to crack down on hate speech and other extremist messaging on their digital platforms.

But the new rules have also raised questions about freedom of expression. Digital and human rights groups, as well as the companies themselves, had opposed the law on the grounds that it placed limits on individuals’ right to free expression. Critics also said the legislation shifted the burden of responsibility to the providers from the courts, leading to last-minute changes in its wording.

{snip}

“With this law, we put an end to the verbal law of the jungle on the internet and protect the freedom of expression for all,” Mr. Maas said. “We are ensuring that everyone can express their opinion freely, without being insulted or threatened.”

“That is not a limitation, but a prerequisite for freedom of expression,” he continued.

The law will take effect in October, less than a month after nationwide elections, and will apply to social media sites with more than two million users in Germany.

{snip}

The law allows the companies up to seven days to decide on content that has been flagged as offensive but that may not clearly be defamatory or incite violence. Companies that persistently fail to address complaints by taking too long to delete illegal content face fines that start at €5 million, or $5.7 million, and could rise to as much as €50 million.

Every six months, companies will have to publicly report the number of complaints they have received and how they have handled them.

{snip}

Facebook said on Friday that the company shared the German government’s goal of fighting hate speech and had “been working hard” to resolve the issue of illegal content. The company announced in May that it would nearly double, to 7,500, the number of employees worldwide devoted to clearing its site of flagged postings.

{snip}

Even in the United States, Facebook and Google have taken steps to limit the spread of extremist messaging online and to prevent “fake news” from circulating. Those steps include using artificial intelligence to automatically remove potentially extremist material and banning news sites believed to spread fake or misleading reports from making money through the companies’ digital advertising platforms.