So how does this actually work? Determining what counts as harmful misinformation or borderline content is tricky, especially given the wide variety of videos on YouTube. We rely on external evaluators located around the world to provide critical input on the quality of a video, and these evaluators use public guidelines to guide their work. Each evaluated video receives up to 9 different opinions, and some critical areas require certified experts. For example, medical doctors provide guidance on the validity of videos about specific medical treatments to limit the spread of medical misinformation. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models. These models help review hundreds of thousands of hours of videos every day in order to find and limit the spread of borderline content. And over time, the accuracy of these systems will continue to improve.

Our work continues. We are exploring options to bring in external researchers to study our systems, and we will continue to invest in more teams and new features. Nothing is more important to us than ensuring we are living up to our responsibility. We remain focused on maintaining that delicate balance which allows diverse voices to flourish on YouTube, including those that others will disagree with, while also protecting viewers, creators and the wider ecosystem from harmful content.
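To make the consensus step concrete: YouTube has not published how evaluator opinions are combined, so the sketch below is purely illustrative. It assumes a hypothetical weighted average in which up to 9 ratings are pooled and certified experts (such as medical doctors) count more heavily; the names `Rating`, `consensus_label`, and `expert_weight` are invented for this example and do not reflect any actual internal system.

```python
# Illustrative only: one possible way to fold multiple evaluator
# opinions into a single consensus label for model training.
# All names and weights here are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class Rating:
    score: float       # evaluator's borderline score in [0.0, 1.0]
    is_expert: bool    # e.g. a medical doctor rating a health video


def consensus_label(ratings: list[Rating], expert_weight: float = 2.0) -> float:
    """Weighted average of up to 9 evaluator scores, weighting experts more."""
    if not ratings:
        raise ValueError("need at least one rating")
    total = sum((expert_weight if r.is_expert else 1.0) * r.score for r in ratings)
    weight = sum(expert_weight if r.is_expert else 1.0 for r in ratings)
    return total / weight


# Example: three general evaluators rate a medical video as borderline,
# but a certified expert disagrees, pulling the consensus label down.
ratings = [
    Rating(0.8, False),
    Rating(0.7, False),
    Rating(0.9, False),
    Rating(0.2, True),
]
print(f"consensus: {consensus_label(ratings):.2f}")  # 0.56 vs. 0.80 without the expert
```

A label like this would then serve as one training signal for a classifier, which is how a relatively small pool of human judgments can scale to reviewing hundreds of thousands of hours of video per day.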

1 Based on the 28-day average from 9/17/19 - 10/14/19, compared to when we first started taking action on this type of content in January 2019.



From the timeline: