This is interesting for multiple reasons. First, it highlights just how bad YouTube's problem is. YouTube uses both algorithms and human moderators to police content on its site. Contractors rate content, which trains YouTube's AI on what constitutes "high quality" content. The problem is that contractors were told that videos with high production values got an automatic "high quality" rating, regardless of any objectionable content.

It also puts the spotlight on YouTube's issues with its advertisers. Multiple big-name companies pulled their ads from YouTube in November after discovering that their ads were running alongside child exploitation videos (or innocent videos whose comment sections were rife with pedophilia). The advertisers made clear that they wouldn't return until Google put appropriate safeguards in place. It wasn't the first time this had happened, either.

Now, the fact that JP Morgan has had success with its own algorithm, which uses 17 layers of checks to separate "safe" channels from unsafe ones, reveals just how much advertisers mistrust Google and YouTube. It's also telling that an outside company was able to build an AI to identify these sorts of issues while Google is still struggling with its own methods. It remains to be seen whether YouTube will get its problems under control, or whether more advertisers will flee the service.
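JP Morgan hasn't published the details of its model, but the basic idea of layered filtering is simple: each layer is a check on a channel, and a channel is only deemed "safe" if it clears every layer. As a loose, hypothetical sketch (the layer names and channel fields here are invented for illustration, not taken from JP Morgan's actual system):

```python
# Toy illustration of a layered brand-safety filter. Each "layer" is a
# predicate over channel metadata; a channel is marked safe only if it
# clears every layer. All fields and thresholds are hypothetical.

def has_min_subscribers(channel):
    # Very small channels carry more risk of being unvetted.
    return channel["subscribers"] >= 1000

def no_flagged_keywords(channel):
    # Reject channels tagged with known-unsafe topics.
    flagged = {"violence", "exploitation"}
    return not (flagged & set(channel["tags"]))

def comments_moderated(channel):
    # Unmoderated comment sections were central to the November scandal.
    return channel["moderated_comments"]

# A real system might chain 17 such layers; we show three.
LAYERS = [has_min_subscribers, no_flagged_keywords, comments_moderated]

def is_safe(channel):
    return all(layer(channel) for layer in LAYERS)

channel = {"subscribers": 50000, "tags": ["cooking"], "moderated_comments": True}
print(is_safe(channel))  # True: this example channel clears all layers
```

The appeal of the layered design is that any single failing check disqualifies a channel, which biases the system toward false negatives (excluding some safe channels) rather than false positives (advertising on unsafe ones) — exactly the trade-off a risk-averse advertiser wants.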