Social media platforms like Facebook, Twitter and Google must do more to stop Russian trolls from disrupting next May’s European election, according to the European Commission.

“We need to see an improvement in how we detect and call out disinformation,” Commissioner for Security Julian King told reporters in Brussels on Wednesday. “We need to see the internet platforms step up.”

But even as the Commission presented its Action Plan on Disinformation, a series of measures to protect European elections from fake news, analysts warned the effort is unlikely to be enough to stop foreign actors from distorting the vote.

“Looking at the level of threats, the presence of foreign influencing in Europe and new technologies like artificial intelligence and deep fakes, this is not nearly enough to fix the issue,” said Fabrice Pothier, senior adviser at the Transatlantic Commission on Election Integrity.

At the core of the Commission’s measures is a call on social media companies to provide the institution with monthly reports from January to May, detailing who has bought political advertisements, who is behind fake news campaigns and who is spreading disinformation through bot accounts and so-called troll farms.

The Commission also asked national authorities to monitor disinformation and share their findings with other EU capitals through a “rapid alert system” that would be set up by March 2019 and would warn countries and media about a wave of fake news reports. “We’re not asking anyone else to judge if a piece of information is true or false,” said King. “What we are saying is that we can shine a light on the provenance of that information.”

The measures are unlikely to be enough to stop fake news, according to analysts working in the field.

Governments have underinvested in the problem and social media platforms have been reluctant to share their data with independent fact-checkers or election monitors.

Facebook and Twitter released ad transparency tools months ahead of the U.S. midterm election in November, to provide information about who is spending money on political ads. The two social media giants, together with Google and Mozilla, also committed to report on disinformation and roll out technical tools to monitor the issue when they signed up to the EU’s Code of Practice on Disinformation in September.

But the companies have been slow in rolling out fully fledged tools across the EU. Behind the scenes, executives have said they struggle with the patchwork of electoral laws.

That reluctance makes it hard for Europe’s authorities and independent fact-checkers to help out.

“I can’t independently research or verify what is happening,” said Ravi Vatrapu, a social data scientist and professor at the Copenhagen Business School who chaired a group of fact-checkers, researchers and media experts that was asked by the Commission to scrutinize its work on disinformation. “We have to ask the platforms [for information] but they haven’t opened up to us. If anything, they’ve completely closed down.”

Fighting fake news is also complicated by the difficulties in differentiating between malicious fake news campaigns and genuinely misinformed free speech.

“You’ll have people spreading disinformation that are corrupt, but also people who genuinely believe it to be true,” said Jakub Kalensky, senior fellow at the Atlantic Council think tank, who previously led the disinformation team at the EU’s East StratCom office.

The Commission’s plan depends on national governments agreeing to provide the funding to fight disinformation, and many of those governments have been critical of the Commission’s attempts to crack down on fake news.

The EU’s strategic communication unit in the External Action Service has been chronically underfunded. This year it operated on a budget of €1.9 million, and the Commission has not secured countries’ green light on its request to boost the annual budget to €5 million. By comparison, Russia spends more than €1 billion on disinformation, according to a Commission estimate.

Previous disinformation campaigns in Europe have included efforts to distort the Catalan independence referendum and to influence national elections. The false news reports are often sent from Moscow, but servers in Venezuela and Iran have also been known to contribute to the efforts.

Still, many European governments have been reluctant to confront Russia directly, fearing economic and political repercussions.

“It is extremely sensitive,” said Kalensky, the former East StratCom official. Calling out Russia “might make consensus a bit more difficult, but if we don’t name the problem we can’t face it,” he said.

EU leaders meet in Brussels next week for a European Council summit at which they’re scheduled to discuss the action plan.

Many experts in the area believe that both governments and companies should do more, as time runs out before next year’s European election.

“Why not do a stress test, like we do for the banking sector for financial measures, so we see the vulnerabilities in member states?” said Pothier. “The question is whether politicians want to wake up now, or wake up after the elections with a headache,” he said.