The Iowa caucuses are still eight months away, but the Facebook primary has already begun.

President Donald Trump has poured at least $4.8 million into the platform since the beginning of the year, nearly as much as his top five Democratic challengers combined. Campaign manager Brad Parscale has described his approach as “shock and awe.”

If the 2016 election revealed Facebook’s power to influence elections, the millions spent by the presidential field so far this year suggest it was merely a warm-up for what’s coming next. And yet, researchers say Facebook has weakened or disabled certain tools they use to track political ads and content across the platform, just as users are about to be hit with a tsunami of political content.

“The reality is that Facebook has bent over backward to make it hard to study how the platform works,” said Gennie Gebhart, associate director of research at the Electronic Frontier Foundation.

Facebook has rolled out two key tools to increase transparency in response to Russian influence operations and the Cambridge Analytica data breach. In May 2018, Facebook unveiled the first version of its Ad Library, which now retains ads purchased on the platform for seven years. This March it added a publicly available API for political ads, which allows for bulk data collection.
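In practice, programmatic access runs through Facebook’s Graph API. The sketch below shows how a researcher might assemble a bulk-collection query; the API version, field names, and token are illustrative assumptions based on the publicly documented `ads_archive` endpoint, not a definitive client, and a real query requires a token tied to an approved, identity-verified account.

```python
from urllib.parse import urlencode

# Illustrative sketch only: the endpoint and parameter names follow Facebook's
# publicly documented "ads_archive" Graph API edge; the version string and
# token value are placeholder assumptions.
GRAPH_URL = "https://graph.facebook.com/v3.2/ads_archive"

def build_ad_library_query(search_terms, country="US", access_token="YOUR_TOKEN"):
    """Build a query URL for the political Ad Library API."""
    params = {
        "search_terms": search_terms,
        "ad_reached_countries": f"['{country}']",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "fields": "page_name,ad_creative_body,spend,impressions",
        "access_token": access_token,
    }
    return f"{GRAPH_URL}?{urlencode(params)}"

url = build_ad_library_query("election")
print(url)
```

Note that even this sketch reflects the limitations researchers describe: the `fields` list can request ad text and coarse spend or impression buckets, but there is no parameter that exposes an ad’s images, videos, or targeting criteria.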

But academics who use those tools say they’ve been limited in the following key areas:

Facebook has cracked down on the use of “scraping” tools on its Ad Library, making it more difficult to collect data in bulk and forcing researchers to search the huge database manually.

Facebook limited the number of searches an account can make in the Ad Library; researchers can’t keep up with the number of new ads flooding in.

The company curbed the use of browser plugins that allow users to share with researchers the reasons why they see a particular ad.

Facebook’s API for political ads still doesn’t include the videos or photos in ads or show how advertisers are targeting users.

Data collection is limited by individual logins, which Facebook can approve or deny. It is also impossible to tell whether the dataset is truly comprehensive.

Taken together, the changes over the past year have stymied efforts to track advertising and political messaging on Facebook, making it barely more transparent than it was in 2016, when it was used as a key tool for Russian information warfare.

It means candidates, regulators, and the public have less information about political ads on Facebook than about those on TV, radio, and other media, which don’t use the same surveillance or targeting technology as Facebook. Yet those simpler formats are required by law to disclose their ad buyers and list them in a public file.

The Ad Library and API are Facebook’s attempt to pre-empt such requirements. Their prototypes were rolled out in the months following last year’s Cambridge Analytica scandal, which revealed how a rogue academic helped a Trump-aligned data firm improperly access the information of up to 87 million users.

Damon McCoy was among the first researchers to use early versions of both tools. His team at New York University began scraping the ad archive, which initially included only political ads, in the weeks after it rolled out in May 2018.

The huge trove of information yielded a significant finding: President Donald Trump shoveled far more cash into Facebook that month than any other political account. A Facebook representative even lauded the revelation to The New York Times as evidence of a new era of transparency.

“But then, a few weeks later, they began making changes to the archive’s code to basically block scraping,” McCoy said, adding that his contacts at the company cited data security concerns. “Clearly, Facebook was intending to make the data public, but in a limited fashion.”

Within months, Facebook made another U-turn, granting McCoy’s team highly coveted access to a beta version of the archive’s API, which purported to allow the programmatic data collection that was off-limits in the ad archive. The API was glitchy and confined largely to text content, McCoy said, with strict search limits and frequently changing available information. An NDA prevented his team from publicizing any underlying data.

Since March, when Facebook unveiled its publicly available API and expanded Ad Library, which now includes non-political ads, McCoy and his colleagues have continued to struggle. The team is waiting on additional logins to the API, which caps data queries for each individual account, and constantly redesigning its scraping tools, since bulk collection of additional information from the Ad Library risks violating Facebook’s Terms of Service.

“I don’t know that we’ll be able to keep up with the volume when the next election hits”

“Through experimentation, we have hit upon things that are currently working,” McCoy said. “But we have to go much slower than we used to go, to a point that I don’t know that we’ll be able to keep up with the volume when the next election hits.”
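The per-account query caps McCoy describes turn collection into a pacing problem: a pipeline must ration its budget of requests rather than run flat-out. A minimal sketch of that constraint, where the hourly cap is an assumed figure and `fetch_fn` stands in for an actual API call:

```python
import time

def paced_fetch(items, fetch_fn, queries_per_hour=200, sleep=time.sleep):
    """Run fetch_fn over a work list without exceeding an hourly query cap.

    queries_per_hour is a made-up illustrative limit, not Facebook's actual
    cap; sleep is injectable so the pacing can be skipped in tests.
    """
    interval = 3600.0 / queries_per_hour  # seconds to wait between queries
    results = []
    for i, item in enumerate(items):
        if i:  # no need to wait before the very first query
            sleep(interval)
        results.append(fetch_fn(item))
    return results

# Demo with a stand-in fetcher instead of a live API call, and no real sleeping.
pages = ["page_0", "page_1", "page_2"]
print(paced_fetch(pages, lambda p: p.upper(), sleep=lambda s: None))
```

At 200 queries an hour, a backlog of tens of thousands of new ads takes days to traverse, which is the arithmetic behind McCoy’s worry about keeping up during an election surge.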

Criticism of the new tools has been mounting in recent weeks as the company has unveiled the first glimpses of its “privacy-focused vision” and as debate over tech regulation heats up on Capitol Hill. In late April, the Mozilla Foundation publicly criticized the API for constraining data queries to keyword searches, imposing strict limits on data collection, and lacking engagement data, such as likes or shares.

“Facebook hasn’t demonstrated a history of following through on commitments, particularly on political ad transparency,” Ashley Boyd, Mozilla’s vice president of advocacy, told VICE News.

What’s more, neither of the highly touted tools includes information on how advertisers like Trump target specific segments of users based on the personal data Facebook collects. Company officials claim that the vague data currently offered on audience makeup and reach offers more meaningful insights than targeting criteria. Researchers overwhelmingly disagree.

“It’s the key to discriminatory ad practices in the sense of demographic discrimination—racist or gendered or other discrimination,” said Hamsini Sridharan, project director for MapLight, a nonprofit that tracks money’s influence in politics. “But it also allows politicians to basically tell one group one thing and another group another — and do that a thousand or million times over.”

Hotbed for misinformation

Those fears aren’t confined to political advertising. It’s still not clear why News Feed boosts certain stories, how advertisers target Facebook users so specifically, or the ways misinformation and extremist messages can spread so quickly on Instagram or WhatsApp.

Experts worry that, once again, Facebook holds the power to deepen the divisions that already exist in American culture — and won’t reveal how it wields that power.

Kremlin-backed actors took advantage of that system by serving Americans divisive ads and posts in the leadup to 2016. While Facebook officials say its countermeasures prevented a similar influence campaign in last year’s midterms, FBI Director Christopher Wray warned at a Council on Foreign Relations event in April that “we’re very much viewing 2018 as just kind of a dress rehearsal for the big show in 2020.”

Any such influence campaign — foreign or domestic — could come in a media environment that’s even more complicated than four years ago.

“Overall, political advertising is becoming less transparent,” said John Wonderlich, executive director of the transparency-focused Sunlight Foundation. “More advertising is moving online, where it’s not regulated outside these self-imposed policies. And the entities paying for ads are often pass-through entities to shield the actual interests behind it.”

Facebook is not legally liable for the content that appears on its platform. And lawmakers have never required tech firms to disclose political advertisers as they do for TV or radio.

This month, however, a bipartisan coalition of lawmakers reintroduced the Honest Ads Act. The bill would require tech companies like Facebook to publicize the same advertiser data as those older media formats, catalogue all requested ad buys, and file descriptions of target audiences. Facebook has at times supported the bill, and at others lobbied against it. It’s unlikely to pass a GOP-controlled Senate.

“It’s going to hurt our democracy, because all the dirty money is going to go online”

“It’s going to hurt our democracy, because all the dirty money is going to go online,” Sen. Amy Klobuchar (D-Minn.) said at a Senate Judiciary Committee hearing on digital advertising last week. “And we are literally not going to know what the ads are, because there’s no requirements that they be archived in law.”

Facebook officials have defended their transparency efforts for both political and non-political ads, while acknowledging that they’re a work in progress. Spokespeople also told VICE News that they will never be able to totally satisfy researchers — who want all possible data at their fingertips — without allowing a repeat of Cambridge Analytica.

“With any new undertaking we're committed to taking feedback, and learning and improving our tools to make them more useful,” a company spokeswoman said. “We know we can’t keep elections safe on our own and welcome researcher, journalist and watchdog groups to analyze these ads.”

Some critics warn, however, that Facebook’s ad business relies on keeping its targeting tools secret.

“As soon as you take away Facebook’s ability to shield the targeting that it engages in, you take away the business model,” said Dipayan Ghosh, a Shorenstein Center fellow at Harvard and former policy adviser at Facebook and the Obama White House. “If we see the ways Facebook is shuffling us into audience segments and subjecting us to media and information, it will reflect enormous amounts of disparate impacts. That’s something the company doesn’t want to entertain.”

Secrecy as business model

Despite such skepticism, there have been some efforts from within Facebook to allow outsiders to peek under the hood. Last year, just weeks after the Cambridge Analytica news broke, the company trumpeted a unique partnership that would provide researchers with internal data to study the company’s impact on elections in the U.S., Brazil, Italy, and elsewhere.

After a yearlong wait, Facebook will finally begin supplying a small group of academics with access to secure datasets in June. A group called Social Science One will act as a mediator between the researchers and the company. And the Social Science Research Council, which raised money for the project, will oversee a peer review process outside Facebook’s control.

“It’s a complicated partnership that required leaps of faith and established trust,” said Alondra Nelson, a Columbia University sociology professor and president of the Social Science Research Council. “In the end, it is so important for us as a global society to have access to this kind of data.”

The project has been met with skepticism by some outside researchers. For all of Facebook’s built-in privacy protections, the project offers no data on the company’s ad business beyond what is already available through the ad archive API. Researchers involved with the project acknowledge that the other datasets are also narrow, showing publicly available engagement metrics on organic posts and URLs shared by 100 unique users with public privacy settings.

“This is not something companies do every day,” said Chaya Nayak, Facebook’s strategic initiatives manager, who helped spearhead the project. “What’s really important about this is that the pipeline has been opened.”

Most researchers have instead been forced to find other ways to analyze Facebook advertising, like buying up ads and reverse-engineering their target audience. Such backdoor routes have found ads that aren’t included in the company’s archive, suggested scattershot verification of ad buyers, and revealed discriminatory targeting practices in politics, housing, and other areas.

But Facebook also polices some of those efforts in seemingly selective ways. In late January, the company effectively shut down a ProPublica browser plugin that aggregated volunteers’ ad targeting data — months after a company official reportedly told the outlet that it served “an important purpose.” Facebook officials say that they take such actions to prevent behavior that goes against the company’s Terms of Service, not in response to specific research efforts.

Analysts acknowledge the tradeoffs around user privacy Facebook has to make with such decisions. But the unsanctioned workarounds remain necessary because the company’s own tools are so limited, said Aaron Rieke, project manager at Upturn, a progressive policy organization that focuses on the tech industry.

“Facebook is a very sophisticated technology company,” added Rieke, who’s currently studying Facebook’s processes for delivering employment and home-sale ads. “Giving a basic way to search through ads shouldn’t be that hard.”