The Facebook ad purchased by a page called “Angry Buckeyes” in December seemed ordinary at first: it cited a news article that claimed President Donald Trump’s tariffs on China could cost Americans money.

But the ad actually belonged to an apparent cluster of pages that paid to promote similar messages, didn’t fully disclose their backers, and sought to influence voters in key battleground states – deficiencies that have cast fresh doubt on Facebook’s efforts to protect users from manipulation.

The discovery is one of several weaknesses uncovered by experts at New York University’s Tandon School of Engineering, who performed a security audit of Facebook’s online ad archive between May 2018 and June 2019. Their conclusions point to myriad opportunities malicious actors may have had to exploit the platform’s powerful targeting tools while hiding their tracks, misleading users and evading Facebook’s enforcement.

In the years after Russian agents weaponized the social-networking platform as part of their sweeping efforts to sway the 2016 presidential election, Facebook developed verification measures designed to prevent foreign actors from purchasing political ads. It also undertook transparency initiatives that placed paid posts in a public archive. But researchers Laura Edelson, Tobias Lauinger and Damon McCoy found a series of defects that still could “enable a malicious advertiser to avoid accurate disclosure of their political ads,” as they wrote.

More than 86,000 Facebook pages ran at least one political ad that was not properly disclosed, according to the report. Facebook later caught and included these ads in its archive, but it remains unclear whether the company ever fully vetted nearly 80 per cent of the pages that paid to promote their messages in the first place.

Roughly 20,000 ads had also been purchased by “likely inauthentic communities,” according to the report, which defined such communities as clusters of pages that appear to be linked because they promoted the same or similar messages. That included businesses looking to advance their interests without clear fingerprints, for example, and more opaque entities that hawked potentially fraudulent insurance products. Because these ads touched on political themes, they were included in Facebook’s archive.

In one example, the Facebook page Angry Buckeyes, currently followed by 21,000 users, appeared to belong to a cluster of pages organised around topics including race, religion and other traits. Researchers discovered the link because some of the pages in the cluster ran identical ads. Nowhere, however, did the pages disclose their possibly shared roots. The lack of transparency troubled the trio of digital experts, who expressed fear that “the disinformation campaign” orchestrated by this cluster was “attempting to sway voters” in key political swing states.

Facebook said that in recent months it had remedied the deficiencies that researchers identified in their study. The company, for example, has sought to require more information about Facebook pages – who is behind them and who is paying for their ads.

“Our authorisation and transparency measures have meaningfully changed since this research was conducted,” spokesman Joe Osborne said in a statement. “We offer more transparency into political and issue advertising than TV, radio or any other digital ad platform.”

But Edelson, one of the authors of the NYU study, said some concerns persist – including fears that Facebook isn’t aggressively enforcing its own rules.

“Facebook’s ad platform and their transparency mechanisms were simply not built with security in mind,” Edelson said.

The researchers’ findings could seed further doubt among regulators and the public about Facebook’s preparedness for the 2020 presidential election. In 2016, Russian agents used narrowly targeted political ads to bait unsuspecting users into joining seemingly innocuous pages and groups, where they were then bombarded with divisive and false posts, photos and videos.

In recent years, Facebook has sought to toughen its defences. It hired more workers to review its site and put in place new policies to stamp out what it labels “coordinated, inauthentic behaviour,” resulting in the removal of accounts and other content linked to Russia and malicious actors. Facebook CEO Mark Zuckerberg has since touted recent successes in elections around the world, including the 2018 congressional midterms.

Central to Facebook’s transparency efforts is its ad archive, which it unveiled in 2018 under pressure from lawmakers. The public repository shows that campaigns, businesses and other organisations have spent roughly $1.1bn (£842m) on ads since it came online. But it also reflects a wide array of misleading or troubling ads about drugs, insurance and housing, as well as false, paid political posts from candidates, including Trump, that Facebook has refused to remove.

In their study, NYU researchers point to other troubles. Sixteen clusters of Facebook pages, for example, purchased roughly $3.8m in potentially problematic political ads. These “inauthentic communities” included pages named “Our Part of Ohio”, which focused on users in the state, and “Giving Care”, which primarily served as a hub for seniors. Much of the content on these pages was apolitical, the NYU report found, but periodically they would purchase political ads that contained similar or near-identical text.


In May, for example, the page Giving Care ran a national ad about the high costs of prescription drugs that was viewed up to 5,000 times. Another page, called “Middle Class Voices of Pennsylvania,” ran the exact same ad, at the exact same time, reaching many of the same states. Neither page, however, indicated that it might be affiliated with the other, or disclosed the person or organisation that funded the ad, the report found. That raised concerns among researchers that the activity might be coordinated and inauthentic. Attempts to reach the owners of the pages through Facebook were unsuccessful.

Some businesses, meanwhile, engaged in “astroturfing” – setting up seemingly independent entities to push messages that benefit their for-profit operations. In one example, researchers found ads from pages called “Isabella Wind” and “Neosho Ridge Wind” that paid to promote the exact same message – about the economic benefits of wind power for farmers – in different parts of the country. Only by navigating off Facebook would a user discover they were part of the same energy firm.

Still another cluster of 13 pages identified by researchers sold questionable insurance products, including “TrumpCare,” seeking to play off the president’s supporters to sell coverage. One page in this group, called “National Veteran Loans,” sought to pitch former military service members in Nevada, Florida and elsewhere on home-financing options.

These Facebook pages, which relied on seemingly innocuous names, often were viewed by older users, researchers said. But some of the ads didn’t link to legally registered businesses, the report found, concluding they were “likely violating Facebook’s policies.” Some of the organisations continue to advertise on Facebook.

Facebook pointed to some of the steps it has taken since researchers concluded their report. The company in October began requiring Facebook pages that purchase political ads to provide more information about their identities, such as their tax identification number. And the tech giant says it now requires suspicious pages to verify who is behind them and share that information publicly.

Edelson and her fellow researchers acknowledged some of those changes would help, particularly in ensuring large pages and significant spenders are more transparent in their ads. But, Edelson said, Facebook’s efforts to protect voters from manipulation largely come down to its own vigilance.

“Enforcement really needs to be stepped up,” she said.