Over the past 20 years, we’ve grappled with the tension between the freedom of information the web enables and the need to ensure trust in information. Elevating accurate, quality content and stemming the flow of misinformation is a challenge that requires collaboration across the news industry, the research community, and digital platforms.

Here are some of the steps we're taking on the issue.

Increasing the integrity of information we display during breaking news

During breaking news or crisis situations, stemming the tide of misinformation can be challenging. Speculation can outrun facts as legitimate news outlets on the ground are still investigating. At the same time, bad actors are publishing content on forums and social media with the intent to mislead and capture people’s attention as they rush to find trusted information online.

To reduce the visibility of this type of content during crisis or breaking news events, we’ve improved our systems to put more emphasis on authoritative results over factors like freshness or relevance. This builds on the search quality improvements we announced last year. At the moment, this is only available in the U.S., but we’ll roll it out globally in the coming months.

There are comparable challenges on YouTube, which is learning from and adapting some of the work done by Google Search. YouTube now highlights relevant content from verified news sources in a “Breaking News” section on its homepage and in a “Top News” shelf in search results.

Collaborating with the industry to surface accurate information

Giving publishers the ability to better structure their data or embed quality signals can help platforms like Google more easily recognize quality content. That’s why we’re involved in the Trust Project, which developed eight indicators of trust that publishers can use to better convey why their content should be seen as credible.

There’s evidence that applying these indicators helps build trust and counters the negative impact of misinformation. For example, after the Trinity Mirror in the U.K. implemented the Trust Project, consumer trust in the newspaper increased by 8 percent.

Building on that work, we’re partnering with the Credibility Coalition to drive the development of technical markers that can enable third-party assessments of online content. This summer, in partnership with a newly created World Wide Web Consortium (W3C) community group, the Credibility Coalition will explore new approaches to analyzing and assessing the credibility of information online.

In 2016 we introduced Fact Check tags in Google News to help people understand what they are clicking on and reading. We’ve since expanded to other products like Google Search. Today we’re partnering with the National Academies of Sciences, Engineering, and Medicine, The New York Times Health team and Memorial Sloan Kettering Cancer Center to focus on the integrity and accuracy of health information found on the web.

And starting on April 2—International Fact-Checking Day—the Google News Initiative, alongside the International Fact-Checking Network, will offer advanced training on tools for identifying misinformation online to more than 20,000 students globally.

Many countries will hold elections in 2018, including Mexico, Brazil, Indonesia and the U.S. As part of our broader election work at Google to combat mis- and disinformation during election cycles and beyond, we’re providing support to our partner First Draft in the launch of a “Disinfo Lab.” Based at Harvard, the lab will employ journalists to leverage computational tools to monitor misinformation in the run-up to and during elections.

We also want to support the global research community in their efforts to train new models to detect synthetic, computer-generated voice and video files. Soon we’ll release datasets that can be used to train such models to detect synthesized audio content, and make them available to journalism and research communities.

Helping young people distinguish quality content online

Media literacy has emerged as one of the most important issues of our digital age. In a study from the Stanford History Education Group, 93 percent of college students couldn’t flag a lobbyist’s website as biased, and 82 percent of middle schoolers couldn’t distinguish sponsored content from real news.

We’ve already supported media literacy programs in the U.K., Brazil and Canada, but there’s more we can do. So today, we’re launching a $10 million global initiative from Google.org to find ways to tackle the challenge.

The first project in this global effort is MediaWise, a U.S.-based partnership bringing together the Poynter Institute, the Stanford History Education Group, and the Local Media Association. Supported by a $3 million Google.org investment, MediaWise is a media literacy project designed to help millions of young people in the U.S. discern fact from fiction online, through classroom education and video—with a little help from several teen-favorite YouTube creators.

People need journalism they can count on. But it will take collaboration with partners across a broad spectrum to provide good solutions to the problem of misinformation, and to help build a more informed world.