A news story that's been labeled false by Facebook's third-party fact-checking partners sees its future impressions on the platform drop by 80%, according to new data contained in an email sent by a Facebook executive and obtained by BuzzFeed News.

The message also said it typically takes "over three days" for the label to be applied to a false story, and that Facebook wants to work with its partners to speed the process.

The data about the effectiveness of Facebook's fact-checking partnership initiative was contained in a brief email sent today by Jason White, Facebook's manager of news partnerships, to the company's fact-checking partners.

"We have been closely analyzing data over several weeks and have learned that once we receive a false rating from one of our fact checking partners, we are able to reduce future impressions on Facebook by 80 percent," White wrote.
A Facebook spokesperson told BuzzFeed News that the system begins to "demote" a story in the News Feed after a single fact-checker finds it to be false. The label is then applied to a link once at least two checkers rate it false.

The statistic about the reduced spread of fact-checked stories, which was not accompanied by any explanation of how the figure was arrived at, marks the first time Facebook has shared internal data about its checking program. White's email emphasized that the company wants to work with its partners "to surface these hoaxes sooner" because of the lag between a hoax being published and the label being applied.

"It commonly takes over 3 days, and we know most of the impressions typically happen in that initial time period," White wrote. "We also need to surface more of them, as we know we miss many."

Facebook has been working with external fact-checkers like PolitiFact and Snopes since December in an effort to reduce the spread of false stories on its platform. The checkers are given access to a special tool where they can view stories being shared on Facebook that are flagged as potentially worthy of a fact check. If two or more checkers deem a link to be false, Facebook adds a label to inform users that it has been flagged by fact-checkers.

From the moment of its launch, the efficacy of the disputed label has been questioned. In May, The Guardian reported that some publishers of false stories saw shares of their content increase after the disputed label was applied. Last month, Politico cited data from Yale researchers that found the label "has only a small impact on whether readers perceive their headlines as true."

White's email marks the first time the company has provided its own data to back up public statements from executives that the fact checks and labels do help stop a story from spreading on the platform. It also gives the fact-checking partners their first tangible sense of the impact of their work.

Facebook has also long emphasized that data gathered from fact checks helps inform the News Feed algorithms' decisions about what content to surface for users, and that this ultimately has more effect than the public-facing label. But the checkers have been asking Facebook for data since the early days of the program.