On 28 August 2017, social media giant Facebook announced a new measure aimed at penalizing websites that publish false information by blocking repeat offenders from purchasing ads that would help them grow their audiences.

In a statement, product managers Satwik Shukla and Tessa Lyons wrote:

Over the past year we have taken several steps to reduce false news and hoaxes on Facebook. Currently, we do not allow advertisers to run ads that link to stories that have been marked false by third-party fact-checking organizations. Now we are taking an additional step. If Pages repeatedly share stories marked as false, these repeat offenders will no longer be allowed to advertise on Facebook. This update will help to reduce the distribution of false news which will keep Pages that spread false news from making money. We’ve found instances of Pages using Facebook ads to build their audiences in order to distribute false news more broadly. Now, if a Page repeatedly shares stories that have been marked as false by third-party fact-checkers, they will no longer be able to buy ads on Facebook. If Pages stop sharing false news, they may be eligible to start running ads again.

Facebook came under scrutiny after the 2016 U.S. presidential election, in which Donald Trump won an unexpected Electoral College victory amid rumors of foreign interference. The site has since been introducing measures to combat the spread of false information. In the lead-up to the 8 August 2017 presidential election in Kenya, the company took out newspaper ads with guidelines on spotting “fake news”:

#Facebook takes out full-page add in #Kenya newspaper to offer advice on identifying fake news. A brave new world #KenyaDecides #fakenews pic.twitter.com/56R6KVYqWp — Matina Stevis (@MatinaStevis) August 3, 2017

The site purchased similar ads in the United Kingdom ahead of the general election there as well.

In a 27 April 2017 paper, Facebook threat intelligence team members Jen Weedon and William Nuland and chief security officer Alex Stamos noted that false information can be created for nefarious purposes and amplified by both fake accounts and unsuspecting members of the public, and that it generally falls into four categories:

Information (or Influence) Operations – Actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome. These operations can use a combination of methods, such as false news, disinformation, or networks of fake accounts (false amplifiers) aimed at manipulating public opinion.

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

The Facebook product managers said in the Monday statement the company is responding by “disrupting the economic incentives to create false news; building new products to curb the spread of false news; and helping people make more informed decisions when they encounter false news”.

In December 2016, Facebook tapped third-party fact-checking organizations, including snopes.com, to participate in a program to help flag false stories on the Internet.