A major new academic study has found Facebook's fact-checking methods, aimed at identifying and neutralizing 'fake news' on the network, are ineffective - in part because individuals don't trust the mainstream media

A Yale University study has found Facebook's much-touted anti-fake news strategy — of fact-checking stories and tagging inaccurate content — doesn't work.

The research found tagging false news stories as "disputed by third party fact-checkers" had a meager impact on whether readers perceived headlines to be true. Overall, the existence of "disputed" tags made participants just 3.7 percentage points more likely to correctly judge headlines as false.

For some groups — particularly supporters of President Donald Trump, and adults under 26 — flagging bogus stories actually ended up increasing the likelihood users would believe fake news.

The researchers believe the sheer volume of information that floods the social media network makes it impossible for the fact-checking groups Facebook has partnered with — PolitiFact, FactCheck.org and Snopes.com — to scrutinize every story. Moreover, the existence of flags on some stories made individuals even more likely to believe any story that was not flagged. The team believes the results make it unclear whether Facebook's efforts in this regard can even be considered a net positive.

Trump vs Clinton Supporters

The study involved over 7,500 people — researchers presented participants in a control group with 24 randomly ordered headlines, 12 true and 12 false, all pulled from stories posted on Facebook in 2016 or 2017, and asked them to rate the accuracy of each. In this control group, participants correctly judged real news stories as accurate 59.2 percent of the time, while incorrectly believing false stories 18.5 percent of the time.

The experiment was then repeated with additional groups, except six of the 12 fake news stories were flagged as "disputed."

In the control group, Trump supporters believed 18.5 percent of false headlines and 58.3 percent of real stories were accurate. The existence of flags made them 2.9 percentage points more likely to correctly judge a "disputed" story as false, but also 1.8 percentage points more likely to think an unflagged fake news story was true. They became 1.2 percentage points more likely to correctly judge real stories as accurate.

While it may appear from those numbers the flags help at least a little, the researchers are concerned the volume of unflagged fake news is so high the negative impact from those stories overwhelms any benefit.


For Clinton supporters, who started at 18.5 percent on false news and 60 percent on true news, adding flags made them 4.3 percentage points more likely to correctly identify fake news, and 2.3 percentage points more likely to correctly judge accurate news as real.

Part of the reason for the discrepancy between Trump and Clinton supporters may come from attitudes toward the media, the researchers believe. As part of the study, Rand and Pennycook asked participants how much they trusted third-party fact-checkers. On a scale of 1-5, with five being most trusted, Clinton supporters rated their trust in third-party fact-checkers at 3.1 out of 5; Trump supporters, 2.4.

Not Credible

Perhaps the most damning findings of the study, however, related not to Facebook's fake news fighting efforts, but to the mainstream media. For one, the younger participants were, the less likely they were to place stock in flags, which the team suggests indicates declining trust in media among younger generations.

Most significantly however, the team ran a version of the study in which they displayed media publications' logos prominently next to headlines, to heighten awareness of a story's source. The logos had no measurable effect — suggesting individuals don't find mainstream media outlets "particularly credible."

The study is not without potential shortcomings — most significantly perhaps, the study was conducted using a survey website, not through the actual Facebook platform. However, it would not be the first indication the social media giant's ambitions in this regard are fruitless.

News on Facebook create the illusion of knowledge and reduce motivation to search for information https://t.co/WadAsIb32q #fakenews pic.twitter.com/ftIPtTZgnF — Ana Isabel Canhoto (@canhoto) April 28, 2017

Facebook's early pilot of a fact checking provision was rolled out in Germany in April, allowing users to flag potential "fake news" — the content was then sent to an independent verification team for analysis. Early analyses of the approach suggested it was doubtful non-experts could tell the difference between fake and non-fake content, and that highlighting a story as untrue could actually make people remember the story as true.

German investigative journalism center Correctiv also started its own dedicated website, "Echtjetzt" ("Really now?"), to debunk false information, the results of which were decidedly mixed. The system requires a second team of fact-checkers to verify Correctiv's work, meaning it can be hours — if not days or weeks — before a story can be conclusively demonstrated to be untrue.

Moreover, Facebook itself has acknowledged the reach of both "false amplifiers" and "fake news" is minuscule on the platform, with such content accounting for one tenth of one percent of overall civic engagement on Facebook. As such, any fake news fighting techniques may be more trouble than they're worth.