As it scrambles to stem the spread of fake news on its network, Facebook is rolling out features in Canada that allow users to see more information about the sources behind content that goes viral across its platform.

The social media giant said it plans to introduce a button that will appear next to links on Facebook’s news feed allowing users to see more information about the source of the link, including details from the publisher’s Wikipedia page, other recent articles from the same source and whether friends have shared the same news story.

Facebook said the features will begin rolling out in Canada on Wednesday. The company previously tested them on news articles in the United States and Britain, and plans to expand them to all links posted on its platform.

The changes are the latest in the Menlo Park, Calif.-based company’s efforts to respond to intense pressure to crack down on a proliferation of fake news, misinformation and foreign political interference on its platform ahead of U.S. midterm elections in November and a Canadian federal election next year.

The move also comes as politicians are turning up the heat on U.S. social media firms. U.S. Attorney-General Jeff Sessions is set to meet with several state attorneys-general on Sept. 25 to discuss concerns that social media companies “may be hurting competition and intentionally stifling the free exchange of ideas on their platforms,” the Justice Department said in a statement. Top executives from Facebook and Twitter testified before the U.S. Senate Intelligence Committee earlier this month.

Facebook has been keen to show lawmakers it is making strides in its efforts to block bad behaviour on its platform.

The company has made more than a dozen changes to its sites over the past 18 months since it came under fire for allowing Russian-linked pages and accounts to purchase thousands of divisive political ads during the 2016 U.S. presidential election. It has deleted more than one billion fake accounts, is working to hire 10,000 new content moderators and has updated its news feed to prioritize posts from friends and family over third-party content from publishers.

Facebook identifies fake accounts in part by looking for profiles that repeatedly post the same information or links, or that show a sudden surge in messaging and posting activity. The company has said fake accounts are a major source of the spam and misleading content that appears on Facebook.
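The two signals described above can be sketched as a simple scoring rule. This is an illustrative toy, not Facebook's actual system: the thresholds, function name and inputs are all hypothetical, chosen only to show how "repeated identical posts" and "a sudden surge in activity" might each trip a flag.

```python
from collections import Counter

# Hypothetical thresholds for illustration only -- not Facebook's values.
REPEAT_THRESHOLD = 5    # identical posts before an account looks automated
SURGE_MULTIPLIER = 10   # jump in posts/hour that counts as a sudden surge

def looks_suspicious(posts, baseline_rate, recent_rate):
    """Flag an account using the two signals from the article.

    posts         -- list of post texts (or shared links) from the account
    baseline_rate -- the account's historical posts per hour
    recent_rate   -- posts per hour over the most recent window
    """
    # Signal 1: the same message or link posted over and over.
    most_common = Counter(posts).most_common(1)
    repeated = bool(most_common) and most_common[0][1] >= REPEAT_THRESHOLD

    # Signal 2: a sudden surge in messaging and posting activity.
    surged = baseline_rate > 0 and recent_rate >= baseline_rate * SURGE_MULTIPLIER

    return repeated or surged
```

For example, an account that posts the same link six times trips the first signal, and one whose posting rate jumps from half a post an hour to 20 trips the second; a real system would weigh many more signals than these two.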

Facebook said it is opening a U.S. elections war room in its headquarters staffed by employees who can make real-time decisions on political content that appears on the site, and is launching automated messenger bots in Brazil ahead of presidential elections next month that can help users fact-check and verify political news they read on the site.

In Canada, Facebook introduced new transparency tools for political ads and announced a partnership earlier this year with Agence France-Presse, a news agency headquartered in France, to fact-check Canadian news stories in both French and English.

“In 2016, we were not prepared for the co-ordinated information operations we now regularly face,” Facebook chief executive Mark Zuckerberg wrote in a recent lengthy blog post. “But we have learned a lot since then and have developed sophisticated systems that combine technology and people to prevent election interference on our services.”

Despite those efforts, last month Facebook said it had uncovered new political influence campaigns led by hundreds of fake accounts linked to Russia and Iran that primarily targeted users in the Middle East, Latin America and Britain. The social media firm previously revealed that it had shut down 32 pages that appeared to be part of a foreign political-influence campaign targeting the coming midterm elections.

Even as it struggles to stem new threats, Facebook's efforts to curb fake news appear to be having some success, according to a study released last week by researchers at New York University and Stanford University.

The researchers examined how Facebook and Twitter users interacted with content from 570 websites known to publish false or misleading news stories between January, 2015, and July, 2018. They found that shares, likes and comments rose sharply in the months before and after the 2016 U.S. presidential election, but that engagement has since plummeted by more than half on Facebook, while continuing to rise on Twitter.

“While this evidence is far from definitive, we see it as consistent with the view that the overall magnitude of the misinformation problem may have declined, at least temporarily, and that efforts by Facebook following the 2016 election to limit the diffusion of misinformation may have had a meaningful impact,” they wrote.

However, the authors warned that false news is still a major problem on social media: as of July, the largest sites publishing fake and misleading content were still attracting roughly 70 million user interactions a month on Facebook, along with four million to six million interactions a month on Twitter.

“The absolute quantity of fake news interactions on both platforms remains large,” they wrote. “Facebook in particular has played an outsized role in its diffusion.”