Digital politics

Facebook's refusal to share data undermines global response to fake news

The social network's refusal to give access to its treasure trove of data is harming countries' response to fake news.

[Image: The Facebook logo is pictured on the sidelines of a press preview of the so-called "Facebook Innovation Hub" in Berlin | Tobias Schwarz/AFP via Getty Images]

LONDON — For a social network that boasts about how much its 2.2 billion users swap photos, videos and messages online, Facebook isn't a big fan of sharing.

And that's becoming a serious problem.

The company refuses to give researchers, academics and journalists access to data it collects on people's individual Facebook pages. That makes it almost impossible to track, analyze and predict how waves of online misinformation — last week's Italian election again showed how such fake news could circulate rapidly among voters — are spread on the world's largest social network.

By failing to open its digital doors, Facebook is doing itself, and the wider public, a disservice.

Disinformation, spread by homegrown activists and foreign actors, is now part of almost every country's election cycle. And without a full understanding of what is posted and circulated on the social networking giant, policymakers in Brussels, London and Washington are left flying blind when figuring out how best to tackle online misinformation, just as people's trust in what they read online has hit an all-time low.

It's about transparency — and Facebook's lack of it.

"Right now, we don't know anything that goes on inside Facebook," said Alexandre Alaphilippe, co-founder of the EU DisinfoLab, a Brussels-based nonprofit organization that tracked the spread of fake news during the recent French and Italian elections. "All of this content is concentrated in a private black box."

Facebook rejects claims that it's not doing its part in tackling fake news. The company says its strict privacy rules (arguably based on past painful run-ins with European data protection watchdogs) mean that it can't just hand over people's data to any researcher or journalist that asks for it.

The social media company also says it is working with some researchers on projects that use anonymized data, even as it has balked at providing one-off access to other academics, according to several people who have requested it.

Facebook officials cite fears that sharing private data would create a precedent in which others (read: government agencies or unfriendly private actors) could also come calling.

“We want to work with the academic community to continue to understand the impact of our platform while making sure we are protecting people's privacy," Lena Pietsch, a Facebook spokeswoman, said in a statement.

But privacy concerns are an imperfect excuse. Researchers aren't after the names, likes and birthdays of individual Facebook users. Instead, they're asking for anonymized datasets to analyze trends in how online content is produced and shared among groups of Facebook users — something that so-called data brokers, companies that sell users' digital data, already obtain through existing commercial agreements with the social network.

Currently, Facebook allows outside groups to analyze data from so-called "public" pages — those created by politicians, brands and companies to share posts on the platform, which can, by default, be read by anyone online.

But it offers no access to anonymized data for individuals' "private" Facebook pages (the ones that you and I use to stay in touch with friends and family), which — importantly — represent the lion's share of online activity where most of the misinformation is created and shared.

"There's a fundamental tension here, but we need greater transparency to shed a light on what Facebook's algorithms do," said Dipayan Ghosh, a former Facebook privacy adviser, who has become a staunch critic of the company while working at New America, a think tank in Washington, DC.

Without access to Facebook's private data, fake-news researchers must rely on imperfect proxies, including other social media sites, to garner any insight into how misinformation is spread on Facebook's network. Many, including the Atlantic Council's Digital Forensic Research Lab, rely on Twitter, mostly because people's posts on that platform are almost always open to the wider world.

That makes sense in the United States, where 68 million Americans (or 20 percent of the population) regularly tweet. But outside the U.S., such a reliance on Twitter (whose international reach is marginal, at best) can skew results and miss important trends in how fake news is circulated.

What happened in Italy

Take last week's Italian election.

Facebook has roughly 25 million users in the country, while Twitter has fewer than 2 million, according to industry estimates. When a false report about potential ballot tampering in Sicily started to spread online on voting day, Twitter users retweeted the misinformation roughly 1,000 times, according to an analysis by EU DisinfoLab.

Yet on Facebook, the same story was shared more than 18,000 times — and that's only on public Facebook pages. How that misinformation spread within Facebook users' private pages (and, notably, who helped to circulate it) remains unknown.

The same story has played out again and again across multiple elections, from the 2016 U.S. presidential campaign to last September's nationwide vote in Germany. Researchers are left scrambling to use unwieldy Twitter data or proxy Facebook public statistics to gauge what was happening on people's private pages.

Without such independent research, countries' lawmakers must rely on the company's own analysis about what type of misinformation is spreading in this no-go zone.

Last year, for instance, the social networking giant eventually told U.S. politicians that roughly 126 million people may have seen Russian-linked posts ahead of the 2016 election, or a more than ten-fold increase on Facebook's initial estimates. In the United Kingdom, Facebook also said that Kremlin-backed groups spent less than £1 on digital advertising connected to the country's 2016 referendum to leave the European Union — a figure that local lawmakers derided as laughably small.

"They could be doing things more useful to explain how the platform is used in campaigns," said Sam Jeffers, co-founder of WhoTargetsMe, a British nonprofit organization that relies on people downloading software onto their computers so the group can track political advertising on individuals' private Facebook pages. "We want to give people more transparency over the types of political messages that they're seeing."

A start would be for Facebook to offer public interest access to all of its data to verified researchers.

Such tools, known in the industry as "APIs," already exist for the social network's public pages. It wouldn't take much to provide anonymized data on online disinformation so that academics and policymakers can get a handle on what really is going on within Facebook's ever-expanding universe.
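The kind of aggregate, identity-free analysis researchers are asking for can be sketched in a few lines. The example below is hypothetical: the `count_shares` function and `sample_posts` data are invented for illustration, though the nested `shares`/`count` layout mirrors the JSON shape Facebook's public-page API returns for posts. Nothing here touches individual users — it only tallies how often posts mentioning a given story were shared.

```python
# Hypothetical sketch: tallying shares of a story across a batch of
# public-page posts. The post dictionaries are invented sample data;
# a real analysis would page through an authenticated API response.

def count_shares(posts, keyword):
    """Sum share counts for posts whose message mentions `keyword`."""
    total = 0
    for post in posts:
        message = post.get("message", "")
        if keyword.lower() in message.lower():
            # Share counts are aggregate figures, not user identities.
            total += post.get("shares", {}).get("count", 0)
    return total

sample_posts = [
    {"message": "Ballot tampering reported in Sicily!", "shares": {"count": 120}},
    {"message": "Election day weather forecast", "shares": {"count": 4}},
    {"message": "More on the Sicily ballot story", "shares": {"count": 75}},
]

print(count_shares(sample_posts, "ballot"))  # 195
```

The point of the sketch is that such metrics are already anonymous by construction: extending the same access to private-page activity, in aggregated form, would tell researchers how widely a false story traveled without revealing who shared it.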

So far, the social networking giant has opposed such steps. But if it continues to put up digital roadblocks, Facebook may soon find that countries' lawmakers — many of whom have been the target of fake news campaigns — will take action into their own hands, forcing the company to cough up the data or face the regulatory consequences.

If that happens, Facebook will only have itself to blame.

Mark Scott is chief technology correspondent at POLITICO.