Here we are again. Once again, content control on YouTube is being questioned, and once again advertisers are pulling out over concerns that their brands are appearing alongside inappropriate content. This time, it’s how videos of minors are being used to create what one video blogger called a “soft-core pedophilia ring.”

YouTuber Matt Watson posted a video on Sunday detailing YouTube comments that were being used to identify and pass along videos of minors doing seemingly harmless activities like yoga, gymnastics, and dancing. Watson's 20-minute video, now viewed more than 2 million times, demonstrates how commenters flagged specific time stamps pointing to sexually suggestive moments and how, once a user clicked on one of the videos, YouTube's algorithms recommended similar ones.

Of course, throughout this disturbing process, brand ads appear right alongside the videos. As Wired UK detailed, brands including Alfa Romeo, Fiat, Fortnite, Grammarly, L'Oreal, and Maybelline were among those found. As a result, Disney, Nestle, Epic Games, and others announced they were pulling ads from YouTube altogether. Sound familiar?

In early 2017, YouTube was lambasted over ads appearing alongside racist content and terrorist group videos, prompting Verizon, Johnson & Johnson, and AT&T, to name three notable brand marketers, to temporarily pull their advertising. There have been other incidents since then. But after a fair amount of time, and much industry hand-wringing over brand safety (some companies have even established permanent chief brand safety officer positions), most advertisers eventually made their way back to YouTube. The platform's scale is simply too vast to ignore. Just last month, AT&T announced that it was heading back to YouTube after a two-year brand safety hiatus. Now the company tells the New York Times, "Until Google can protect our brand from offensive content of any kind, we are removing all advertising from YouTube."

Every minute, up to 300 hours of video content is uploaded to YouTube. That insanely big number represents the size and scale of both the opportunity and the risk facing brand advertisers. Over the last two years, YouTube and advertisers alike have worked to improve brand safety on the platform: more third-party measurement tools to track where ads appear, internal context-analysis technologies that can flag unsafe text and images, and constant purging of unsafe and bogus accounts. Yet this latest incident illustrates that the fight against offensive and inappropriate content will always be an issue. There is no ultimate fix.

One executive at a major global media agency who agreed to comment on background said there is no such thing as 100% safety when it comes to user-generated content. Marketers need to understand that even with a zero-tolerance effort, the risk never reaches zero.

For its part, YouTube has responded fairly quickly. Over the last two days, the platform has taken a more aggressive approach, beyond its normal protections: it has disabled comments on tens of millions of videos that include minors; reviewed and removed thousands of inappropriate comments that appeared on videos featuring young people; terminated more than 400 channels over the comments left on their videos; removed dozens of videos that were posted with innocent intentions but clearly put young people at risk; and reviewed and excised autocomplete suggestions that could have increased the discoverability of content that violates its policies.