
This month, a new law against hate speech will go into effect in Germany, fining Facebook, Twitter, YouTube, and other social media companies up to €50 million if they fail to take down illegal content from their sites within 24 hours of being notified. For more ambiguous content, companies will have seven days to decide whether to block the posts.

The rule is Germany’s attempt to fight hate speech and fake news, both of which have risen online since the arrival of more than a million refugees in the last two years.

Germany isn’t alone in its determination to crack down on these kinds of posts. For the past year, most of Europe has been in an intense and fascinating debate about how to regulate, who should regulate, and even whether to regulate illegal and defamatory online content.


Unlike the US, where we rely on corporate efforts to tackle the problems of fake news and disinformation online, the European Commission and some national governments are wading into the murky waters of free speech, working to come up with viable ways to stop election-meddling and the violence that has resulted from false news reports.

One way to think about the discussion of how to control misinformation/disinformation on social media is to divide people into two camps: the supply-side people and the demand-side people. The supply-siders feel there’s too much dis/misinformation flooding the internet, which is overwhelming real news and causing people to become exhausted and confused as they try to distinguish true information from false. They worry that the more times people read something, the more they believe it, even if it’s false and later discredited. Corrections don’t make much difference when people’s minds are already made up.


The demand-siders argue that fake news has always existed, and so there’s no reason to panic. Their primary interest lies with why people appear to be more susceptible to fake news now than they were previously.

People on the supply side want Facebook and Twitter to limit what they circulate and promote, and to stop allowing people to make money off producing and disseminating fake news. The demand-side people tend to think the responsibility lies more with society. Some groups, including Facebook and various foundations, want to fund more teaching of media literacy in schools so that news audiences will become more discerning consumers. A Facebook spokesman told CJR via email that the company is making efforts to promote journalism training and media literacy, but he declined to say how much funding is devoted to the project. “As a matter of long-standing policy,” he wrote, “we don’t discuss our spending in this regard.”

Others, including Facebook, have suggested labeling non-verified news in the hope that audiences will hesitate to circulate it. Many groups oppose hasty government responses that broaden censorship and liability online and could do long-term harm.

In the US, panels dedicated to these issues often end with media experts saying they don’t want government censorship or social media companies like Facebook to be responsible for corporate censorship. But if the government shouldn’t act, and private companies shouldn’t act, who is supposed to fix the problem?

More generally, many of the solutions proposed—such as more money for fact-checking and media literacy, notifying readers when content is fake, and increasing journalists’ engagement with their readers—seem piecemeal.

Other ideas, such as changing ownership structures or job-creation programs that could attack the roots of the alienation and disillusionment that make people susceptible to propaganda, would take years to implement and even then might not work.

European governments have decades of experience supporting free speech and media pluralism. This is the part of the world that gave us trusted institutions such as the BBC and Swedish Public Radio, the Deutsche Welle Akademie, and a plethora of groups that do media development as well as give free newspaper subscriptions to teenagers (France) and subsidies to newspapers. In Sweden, the government supports provincial newspapers to ensure diversity in the newspaper market. Austria has guidelines under which the government can subsidize non-dominant newspapers that provide political, economic, and cultural information. Norway is currently reviewing its decades-old government subsidy plans and exploring a minimum set price for newspapers.


These government efforts stem from a broader consensus that diversity of opinion, high-quality information, and free expression are essential to a thriving democracy. This is partially why the European Commission as an institution supports “quality journalism in an age of media convergence.” Its efforts include funding a number of media-related NGOs, training workshops, and policy research. For officials working on these subjects, hate speech and problems of disinformation/misinformation are part of the broader Democracy project. Some believe that “disinformation campaigns are undermining the cohesion of the European project,” as Silvio Gonzato, director for strategic communications at the EU External Action Service, puts it.

But these laudable goals make action against issues like fake news and hate speech difficult. The offensiveness and instability that come hand in hand with disinformation campaigns incite fear. But that fear must be balanced against human rights and free expression laws. This hesitation is part of why European countries and the European Union have been slow to make sweeping changes. Even when they do take action, the European approach differs greatly from that of the US, where speech, even what Europeans would define as hate speech, is protected by the First Amendment.

Here are some reasons the European Union is taking a slow and piecemeal approach to legislating against fake news online:

There is a legal difference in Europe between hate speech and offensive speech, and that affects what can be taken down.

Although EU members have all kinds of rules about hate speech, much of what we detest online is not actually illegal. “Fake news does not necessarily contain illegal hate speech or other illegal content,” says Louisa Klingvall, a policy officer at the European Commission in Brussels.

The lack of consensus within the EU constrains what can be done.

Some member countries don’t care about hate speech. Parts of their populations are severely anti-immigrant and care more about their right to say incendiary things online. And their leaders don’t want to stop them. Some of the most far-right European politicians have made their careers in the European Parliament (Marine Le Pen, Nigel Farage), and there have even been cases of European politicians insulting members of their own governments, including, in 2013, an Italian senator making an offensive comment about a black government minister. On the opposite end of the spectrum, Sweden is recognized as a global leader on freedom of expression, while France has led fights to enforce global takedowns of content it finds inappropriate. The lack of agreement impedes legal action.

There are still things we don’t know about the spread and impact of mis/disinformation online in the US, and we know even less about Europe and the rest of the world.

Many of the major recent studies on the prevalence and spread of disinformation, such as those done by professors Yochai Benkler of Harvard University and Philip Howard of the University of Oxford, researched the US landscape. There is not enough information about hate speech and dis/misinformation in Europe. “There have been a few comparative studies about junk news consumption by German, French, and UK voters that broadly find the Germans are getting the least amount of junk, the British the most, and the French somewhere in between,” Howard says. “We know that the Russian government has disinformation campaigns targeted at several countries around Europe, but we have little sense of how effective they are.”

Related to this lack of information is the refusal by Facebook to share much of the audience data it collects. Companies like Facebook and Google are unwilling to open up the black box behind their search and content algorithms.

Facebook knows how to microtarget audiences, but it’s still not clear who is getting what information. Without knowing that, there is no way for academics to measure the impact of repeated exposure to fake news. In some ways, it’s analogous to the early academic research on the effect of Fox News. It took years before the presence of Fox was clearly correlated with Republican voting patterns.

“The most critical data set is granular Facebook data of the kind the company has, but doesn’t share with independent researchers,” Benkler says. “That’s data that would allow researchers to see whether someone was exposed to disinformation, or whether there was a particular disinformation campaign aimed at well-defined voting demographics in critical election districts.”

Facebook’s “advertising algorithms allow politically motivated advertisers to reach a purposefully selected audience,” Benkler says. “Unfortunately, the company provides no public record of the political advertisements it serves to users.” In other media, political candidates are required to declare their sponsorship and file copies with the US Federal Election Commission. But, Benkler says, “in the US election, for example, Trump spent $70 million on Facebook ads we’ll never see.”

Like their US counterparts, EU officials seem to hope social media companies will solve the problem themselves.

Like everyone else, the EU has been trying to get the companies to change what they allow to appear online. Predictably, the companies have been slow to act. EU officials have had repeated meetings with Facebook, Twitter, and Google, and in 2016 they got the companies to agree to new codes of conduct.

“This amounts to a form of self-regulation, which in many cases is seen as preferable to government regulation. However, any self-regulation by tech companies must be transparent, subject to independent oversight, and include some sort of path to remedy for those affected,” says Courtney Radsch, advocacy director of the Committee to Protect Journalists, who has been closely following European approaches.

Under the agreement made in 2016, the EU, with help from social media companies, has been funding training for designated NGOs to flag hate speech and alert Facebook and Twitter when they find something they think is illegal. Individual member countries also run Internet Referral Units for this purpose.

However, EU Commission studies released in December 2016 found that, six months after the agreement, the social media companies were taking down only 28 percent of flagged content and reviewing only 40 percent of notifications within 24 hours. By June 2017, those numbers had risen to 59 percent of flagged illegal hate speech being taken down and 51 percent of notifications being reviewed within 24 hours. The rise in takedowns meets European standards but, of course, worries NGOs and free-expression advocates who are afraid of overcompliance. Twitter lagged behind the other companies in its percentage of takedowns.

Many of the solutions proposed by the EU run counter to free speech and free expression laws/practices.

NGOs working on free speech issues don’t want to leave decisions with social media companies. They say it’s up to the courts to decide what should be taken down.

“There is more and more pressure on companies to do the job of public authorities,” says Maryant Fernández Pérez, senior policy advisor with the European Digital Rights Forum in Brussels. “We are afraid of privatized censorship.” In the US, human rights groups and technology companies alike oppose having technology companies decide what should be removed. “It’s privatized law enforcement and speech restriction, which is always, and in and of itself, a problem. Even when intentions are good, the risk is that lawful speech will be restricted without judicial process,” warns Jens-Henrik Jeppesen, director for European Affairs at the Center for Democracy and Technology (CDT) in Brussels. CDT is a public interest group working on free expression and other technology policy issues. CDT receives funding from a range of foundations and companies, including Google and Facebook.

It’s not just ideas about hate speech that differ between the US and Europe. Europe is taking steps to reduce the market power of the big US tech firms and has handed down several antitrust rulings in recent weeks. EU Competition Commissioner Margrethe Vestager has said repeatedly that tax avoidance by Facebook and Google gives the companies unfair advantages over European firms. It’s likely that more laws regulating the big tech monopolies will come from Europe, if not the US. In Brussels, it is expected that other European countries will follow the German example and pass laws requiring Facebook to take down hate speech and possibly other forms of objectionable content. Officials say the UK and France will be next.

Additional research by Anamaria Lopez. Thanks to Peter Micek and Courtney Radsch for their comments and Andrea Gurwitt for editing.


Anya Schiffrin is the director of the media and technology specialization at Columbia University’s School of International and Public Affairs. She is a PhD candidate at the University of Navarra.