From our founding days at Google, our mission has always been to make information universally accessible and useful. We believe strongly in the freedom of speech and expression on the web—even when that means we don’t agree with the views expressed.

At the same time, we recognize the need for strict policies that define where Google ads should appear. The intention of these policies is to prevent ads from appearing on pages or videos with hateful, gory, or offensive content. In the vast majority of cases, our policies work as intended. We invest millions of dollars every year and employ thousands of people to stop bad advertising practices. Just last year, we removed nearly 2 billion bad ads from our systems, removed over 100,000 publishers from our AdSense program, and prevented ads from serving on over 300 million YouTube videos.

However, with millions of sites in our network and 400 hours of video uploaded to YouTube every minute, we recognize that we don’t always get it right. In a very small percentage of cases, ads appear against content that violates our monetization policies. We promptly remove the ads in those instances, but we know we can and must do more.

We’ve heard loud and clear from our advertisers and agencies that they need simpler, more robust ways to stop their ads from showing against controversial content. While we already offer a wide variety of tools that give advertisers and agencies control over where their ads appear, such as topic exclusions and site category exclusions, we can do a better job of addressing the small number of inappropriately monetized videos and content. We’ve begun a thorough review of our ads policies and brand controls, and in the coming weeks we will be making changes to give brands more control over where their ads appear across YouTube and the Google Display Network.

We are committed to working with publishers, advertisers and agencies to address these issues and earn their trust every day so that they can use our services both successfully and safely.