Facebook: We'll start flagging fake news

Facebook announced steps Thursday to stem the spread of "fake news" on its site, after taking fire for easing the spread of blatantly false information to voters during the 2016 campaign.

Among other steps, the social network said it will start flagging "the worst of the worst" hoaxes shared on the site, which draws more than a billion users in an average day.

"We're committed to doing our part," said Facebook Vice President of Product Development Adam Mosseri in a post announcing the move. "It's important to us that the stories you see on Facebook are authentic and meaningful."

The company's new approach relies on the work of nonpartisan fact checkers assembled under the banner of the Poynter International Fact-Checking Network, a global project of the media institute that also owns the Tampa Bay Times.

Under the new system, when Facebook users attempt to post a story that Poynter-affiliated fact checkers have rebutted, they'll get a pop-up saying, "Before you share this story, you might want to know that independent fact-checkers disputed its accuracy." If the user opts to go ahead, the post will still appear on their friends' News Feeds, but it will be tagged with a red danger-style signal indicating its veracity is in dispute — with a link to a fact checker's debunking.

In a story in POLITICO last week, Alexios Mantzarlis, director and editor of the Poynter Institute's International Fact-Checking Network, said that while "Facebook has completely turbo-powered fake news sites ... it's also probably the first platform that could measure how these things spread, and how we could push back."

Among the other steps, Facebook is making it easier for users to alert both the company and their fellow users that something they see on the site seems fake; the reports on flagged stories, Facebook says, will be fed back to the fact checkers to help them target their efforts.

With this plan, Facebook is leaning heavily on a small number of fact checkers. The members of the so-called Big Three in American fact checking — FactCheck.org, PolitiFact and The Washington Post's Fact Checker — employ fewer than 20 people.

Facebook's experiment may also spark concern from critics who argue that third-party fact checkers bring their own biases and lenses to the effort.

The Poynter network, according to Mantzarlis, has no formal membership, though some 40 organizations around the world have signed onto its code of principles. The project gets funding from, among others, Google, the Omidyar Network, the Bill & Melinda Gates Foundation and the Open Society Foundations.

Facebook has already found itself the target of political ire for its curation of news, most notably after a report surfaced that the company had stripped right-leaning news from the site's "trending" box. That episode was a particular affront to conservatives, who complain they are treated unfairly by so-called mainstream news organizations and have turned to social media as a favored platform.

Mark Zuckerberg has resisted both the idea that "fake news" on Facebook had anything to do with the outcome of the 2016 election and the idea that Facebook is a media company. But Thursday's announcement — along with an earlier post detailing his work-in-progress thinking on tackling the challenges posed by online hoaxes — suggests that he's taking the topic more seriously as a threat.

The risks of being cast as an online gatekeeper might be tempered by the company's focus on what Mosseri, the Facebook vice president, calls "the clear hoaxes spread by spammers for their own gain." Researchers say that addressing even a handful of viral posts can significantly reduce the amount of false material appearing on the social network.

"Let's not underestimate how important it is to act on the biggest fakes," Mantzarlis previously told POLITICO.

Facebook said Thursday that it will also be trying out a handful of other ways of combating fake news, such as preventing flagged stories from being used as the basis of ads and weakening the News Feed ranking of stories that users are less likely to share after reading — a possible signal, the company has said, that users feel the story is misleading.

"We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we're approaching this problem carefully," wrote Mosseri in his post. "We'll learn from these tests, and iterate and extend them over time."