For many people, YouTube is a place to kill time by watching sailing videos, or to pick up tips on how to train a dog or change a car headlight. But the Google-owned video service also has a darker side, according to a number of news articles, including one from the New York Times last year. Some users, these stories say, start out looking at innocuous videos, but get pushed in the direction of radical, inflammatory or even outright fake content. Those pushes come from YouTube’s recommendation algorithm, which some argue has turned the service into a “radicalization engine.” Do the service and the software that powers its video suggestions actually turn users into consumers of far-right conspiracy theories and other radical content, and if so, what should be done about it?

Those are some of the questions we at CJR wanted to address, so we used our Galley discussion platform to convene a virtual panel of experts in what some call “automated propaganda,” including Dipayan Ghosh of Harvard’s Shorenstein Center, New York Times columnist Kevin Roose—who wrote last year’s Times piece on YouTube radicalization—as well as Brazilian researcher Virgilio Almeida, Aviv Ovadya of the Thoughtful Technology Project, former YouTube programmer Guillaume Chaslot, and Harvard misinformation researcher Joan Donovan. One trigger for this discussion was a research paper published recently that not only said YouTube isn’t a radicalization engine, but argued that its software actually does the opposite, by suggesting videos that push users toward mainstream content. As part of our virtual panel, we spoke to a co-author of that paper, Mark Ledwich.

In Twitter posts and on Medium, Ledwich took direct aim at the New York Times and Roose for perpetuating what he called “the myth of YouTube algorithmic radicalization.” The persistence of that myth, he argued, showed that “old media titans, presenting themselves as non-partisan and authoritative, are in fact trapped in echo chambers of their own creation, and are no more incentivized to report the truth than YouTube grifters.” One of the main criticisms of the paper, raised by others in the field such as Arvind Narayanan of Princeton, was that the research was based on anonymized data, meaning none of the recommendations were personalized, the way they were in the New York Times piece (which used personal account data provided by the subject of the story). In his Galley interview, Ledwich pointed out that much of the research others have used to support the radicalization theory is also based on anonymized data, in part because personalized data is so difficult to come by.

Although some argued that Ledwich’s study may not have found radicalization because of changes that have been made to the YouTube algorithm (as a result of critical coverage in the Times and elsewhere), another recent study by Virgilio Almeida and his team found significant evidence of radicalization. That research looked at more than 72 million comments across hundreds of channels, and found that “users consistently migrate from milder to more extreme content,” Almeida said in his Galley interview. Shorenstein Center fellow Dipayan Ghosh, who directs the Platform Accountability Project at Harvard’s Kennedy School and was a technology adviser in the Obama White House, told CJR that “given the ways YouTube appears to be implicating the democratic process, we need to renegotiate the terms of internet regulation so that we can redistribute power from corporations to citizens.”

YouTube has said that its efforts to reduce the radicalization effects of the recommendation algorithm have resulted in users interacting with 70 percent less “borderline content,” the service’s term for content that is problematic but doesn’t overtly break its rules (although it didn’t give exact numbers, or define what counts as borderline). As Ghosh, Almeida, and other researchers have pointed out, the biggest obstacle to determining whether YouTube pushes users toward more radical content is a lack of useful data. As with Facebook and similar concerns about its content, the only place to get the data required to answer such questions is inside the company itself, and despite promises to the contrary, very little of that data gets shared with outsiders, even those who are trying to help us understand how these services are affecting us.

Here’s more on YouTube, radicalization, and “automated propaganda”

With or without: Stanford PhD student Becca Lewis, an affiliate at Data & Society, argues that YouTube could remove its recommendation algorithm entirely and still be one of the largest sources of far-right propaganda and radicalization online. “The actual dynamics of propaganda on the platform are messier and more complicated than a single headline or technological feature can convey,” she says, and they show “how the problems are baked deeply into YouTube’s entire platform and business model,” which are based on complex human behavior that revolves around celebrity culture and community.

Radicalized Brazil: Virgilio Almeida’s research into the radicalization effects of YouTube’s recommendation algorithm formed part of the background for a New York Times piece on some of the cultural changes that led up to the election of Brazilian president Jair Bolsonaro, as well as research done by Harvard’s Berkman Klein Center. YouTube challenged the researchers’ methodology, and maintained that its internal data contradicted their findings, the Times said, “but the company declined the Times’ requests for that data, as well as requests for certain statistics that would reveal whether or not the researchers’ findings were accurate.”

Algorithmic propaganda: Guillaume Chaslot was a programmer with YouTube who worked on the recommendation algorithm, and told CJR that he raised concerns about radicalization and disinformation at the time, but was told that the primary focus was to increase engagement time on the platform. “Total watch time was what we went for—there was very little effort put into quality,” Chaslot said. “All the things I proposed about ways to recommend quality were rejected.” He now runs a project called AlgoTransparency.org, which tracks YouTube’s recommendations, and he is also an advisor at the Center for Humane Technology.

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.