Restricting live features: We updated enforcement of our live streaming policy to specifically disallow younger minors from live streaming unless they are clearly accompanied by an adult. Channels not in compliance with this policy may lose their ability to live stream. We also launched new classifiers (machine learning tools that help us identify specific types of content) on our live products to find and remove more violative live content.

Disabling comments on videos featuring minors: We disabled comments on tens of millions of videos featuring minors across the platform to limit the risk of exploitation. Additionally, we implemented a classifier that helped us remove twice as many violative comments. We recognize that comments are a core part of the YouTube experience, and creators have told us they feel we removed a valuable way for them to connect with and grow their audiences. But we strongly believe this is an important step toward keeping young people safe on YouTube.

Reducing recommendations: We expanded our efforts from earlier this year to limit recommendations of borderline content, extending them to videos featuring minors in risky situations. While this content does not itself violate our policies, we recognize that the minors in it could be at risk of online or offline exploitation. We’ve already applied these changes to tens of millions of videos across YouTube.

Responsibility is our number one priority, and chief among our areas of focus is protecting minors and families. Over the years, we’ve invested heavily in a number of technologies and efforts to protect young people on our platform, such as our CSAI Match technology. And in 2015, because YouTube has never been for kids under 13, we created YouTube Kids as a way for kids to safely explore their interests and for parents to have more control. Accounts belonging to people under 13 are terminated when discovered; in fact, we terminate thousands of accounts per week as part of this process.

We also enforce a strong set of policies to protect minors on our platform, including those that prohibit exploiting minors, encouraging dangerous or inappropriate behaviors, and aggregating videos of minors in potentially exploitative ways. In the first quarter of 2019 alone, we removed more than 800,000 videos for violations of our child safety policies, the majority of them before they had ten views.

The vast majority of videos featuring minors on YouTube, including those referenced in recent news reports, do not violate our policies and are innocently posted: a family creator providing educational tips, or a parent sharing a proud moment. But when it comes to kids, we take an extra cautious approach to enforcement, and we’re always improving our protections. The updates above are a few of the changes we’ve made over the past several months.

In addition, over the last 2+ years we’ve been making regular improvements to the machine learning classifier that helps us protect minors and families. We rolled out our most recent improvement earlier this month. With this update, we’ll be able to better identify videos that may put minors at risk and apply our protections, including those described above, across even more videos.

To stay informed of the latest research and advances in child safety, we work with civil society and law enforcement.
In the last two years, we've shared tens of thousands of reports with NCMEC, leading to numerous law enforcement investigations. Additionally, we share our technologies and expertise with the industry, and we consult with outside experts to complement our team of in-house experts.

YouTube is a company made up of parents and families, and we’ll always do everything we can to prevent any use of our platform that attempts to exploit or endanger minors. Kids and families deserve the best protection we have to offer, and we’re committed to investing in the teams and technology to make sure they get it.

Updated stats on June 3