YouTube has been largely overlooked in the national discussion about the spread of fake news, which typically centers on Facebook and Twitter. But that is changing. Last week, writer James Bridle posted on Medium about the booming world of online videos for children. Bridle explores automatically generated videos based on children’s content on YouTube, as well as human-made parodies of kids’ content that can surface in children’s feeds. It is all too easy for scammers to harvest ad revenue from children’s attention. “I suppose it’s naive not to see the deliberate versions of this coming,” Bridle writes. YouTube has said it will implement new policies in response.

ICYMI: Breitbart’s “sinister” response to Washington Post’s recent bombshell

But it’s only one of many problems that Google (which owns YouTube) is dealing with in surfacing good information, whether on Google Search, Google News, or YouTube. While the company does not want to hide any content (except videos produced to promote terrorism), it continues to get bad press when it surfaces fake news or conspiracy theories as primary results. Google was heavily criticized last month after its search promoted a 4chan post spreading misinformation about the Las Vegas shooter. And, of course, Google also recently testified alongside Facebook and Twitter in Congressional hearings about potential Russian use of their platforms to influence the election.

In the wake of the Sutherland Springs shooting, The New York Times podcast The Daily and others have pointed out how YouTube, especially in the information vacuum following a violent incident, will prioritize clips of reactionary commentators making up information over actual news reports.

ICYMI: Why people are concerned about the Times’s Monday Trump administration scoop

This isn’t just bad for consumers; it’s also bad for business. In September, Forbes reported that YouTube advertisers were “spooked” that their ads would be sold against questionable content. And YouTube’s response, in turn, backfired when many legitimate YouTubers also saw their videos demonetized. The company is now experimenting with other options for creators to make money, so that a few bad apples don’t spoil the bunch.

For a history of changes on YouTube, visit our handy platform timeline.

More on platforms and journalism:

YouTube is also taking a stronger stance on extremist content, removing more of the jihadist cleric Anwar al-Awlaki’s sermons from its platform. Previously the platform limited removal of videos to hateful or violent content, but it has expanded its purview.

Sociologist Tressie McMillan Cottom observes that the Facebook ads shown to her changed when she was reaching out to various populations for her research on non-traditional students. I’m curious: have any journalists experienced the same? Let us know at the email below.

The Tow Center for Digital Journalism’s Research Director Jonathan Albright has also written (before joining Tow) about how AI can easily generate fake news en masse on YouTube by packing segments with keywords.

“Even small independent news outlets can have a dramatic effect on the content of national conversation,” says a new, five-year study from Harvard touting the advantages of being a smaller, independent outlet in this day and age. This study nicely complements the findings from a study at the Tow Center, set to go live in a few hours, on how local newsrooms can capitalize on the unique reporting they bring to the field.

Other notable stories

A scary story in Wired, “How one woman’s digital life was weaponized against her.”

ICYMI: Twitter’s bot problem isn’t going away

Nausicaa Renner is digital editor of CJR.