Francesca Tripodi, a media scholar at James Madison University, has studied how right-wing conspiracy theorists perpetuate false ideas online. Essentially, they find unfilled rabbit holes and then create content to fill them. “When there is limited or no metadata matching a particular topic,” she told a Senate committee in April, “it is easy to coördinate around keywords to guarantee the kind of information Google will return.” Political provocateurs can take advantage of data vacuums to increase the likelihood that legitimate news clips will be followed by their videos. And, because controversial or outlandish videos tend to be riveting, even for those who dislike them, they can register as “engaging” to a recommendation system, which would surface them more often. The many automated systems within a social platform can be co-opted and made to work at cross-purposes.

Technological solutions are appealing, in part, because they are relatively unobtrusive. Programmers like the idea of solving thorny problems elegantly, behind the scenes. For users, meanwhile, the value of social-media platforms lies partly in their appearance of democratic openness. It’s nice to imagine that the content is made by the people, for the people, and that popularity flows from the grass roots.

In fact, the apparently democratic openness of social-media platforms has always been shaped by algorithms and managers. In its early days, YouTube staffers often cultivated popularity by hand, choosing trending videos to highlight on the site’s home page; if the site gave a leg up to a promising YouTuber, that YouTuber’s audience grew. By spotlighting its most appealing users, the platform attracted new ones. The curation also shaped the platform’s identity: by featuring some kinds of content more than others, the company showed YouTubers what kinds of videos it was willing to boost. “They had to be super family friendly, not copyright-infringing, and, at the same time, compelling,” Schaffer recalled, of the highlighted videos.

Today, YouTube employs scores of “partner managers,” who actively court and promote celebrities, musicians, and gamers—meeting with individual video producers to answer questions about how they can reach bigger audiences, giving them early access to new platform features, and inviting them to workshops where they can network with other successful YouTubers. Since 2016, meanwhile, it has been paying socially conscious YouTubers to create videos about politically charged subjects, through a program called Creators for Change. “In this instance, it’s a social-impact group,” Paul Marvucic, a YouTube marketing manager, explained. “We’re saying, ‘We really believe in what you guys are saying, and it’s very core to our values.’ ”

The question of YouTube’s values—what they are, whether it should have them, how it should uphold them—is fraught. In December of last year, Sundar Pichai, the C.E.O. of Google, went before Congress and faced questions about social media’s influence on politics. Democrats complained that YouTube videos promoted white supremacy and right-wing extremism; Republicans, in turn, worried that the site might be “biased” against them, and that innocent videos might be labelled as hate speech merely for containing conservative views. “It’s really important to me that we approach our work in an unbiased way,” Pichai said.

And yet the Creators for Change program requires YouTube to embrace certain kinds of ideological commitments. This past fall, for an audience of high-school and college students, YouTube staged a Creators for Change event in the Economic and Social Council chamber at the United Nations. The occasion marked the seventieth anniversary of the Universal Declaration of Human Rights, and five “ambassadors” from the program joined Craig Mokhiber, the director of the New York office of the U.N. High Commissioner for Human Rights, onstage. “The U.N. is not just a conference center that convenes to hear any perspective offered by any person on any issue,” Mokhiber said. Instead, he argued, it represents one side in a conflict of ideas. In one corner are universal rights to housing, health care, education, food, and safety; in the other are the ideologies espoused by Islamophobes, homophobes, anti-Semites, sexists, ethno-nationalists, white supremacists, and neo-Nazis. In his view, YouTube needed to pick a side. He urged the YouTubers onstage to take the ideals represented by the U.N. and “amplify” them in their videos. “We’re in the middle of a struggle that will determine, in our lifetime, whether human dignity will be advanced or crushed, for us and for future generations,” he said.

Last year, YouTube paid forty-seven ambassadors to produce socially conscious videos and attend workshops. The program’s budget, of around five million dollars—it also helps fund school programs designed to improve students’ critical-thinking skills when they are confronted with emotionally charged videos—is a tiny sum compared to the hundreds of millions that the company reportedly spends on YouTube Originals, its entertainment-production arm. Still, one YouTube representative told me, “We saw hundreds of millions of views on ambassadors’ videos last year—hundreds of thousands of hours of watch time.” Most people encountered the Creators for Change clips as automated advertisements before other videos.

The Mumbai-based comedian Prajakta Koli, known on YouTube as MostlySane, sat beside Mokhiber in the U.N. chamber. Around four million people follow her channel. Her videos usually riff on the irritating people whom she encounters in her college cafeteria or on the pitfalls of dating foreigners. “No Offence,” a music video that she screened at the Creators for Change event, is different. As it begins, Koli slouches in her pajamas on the couch, watching a homophobe, a misogynist, and an Internet troll—all played by her—rant on fictional news shows. A minute later, she dons boxing gloves and takes on each of them in a rap battle. After the screening, Koli said that she had already begun taking on weighty subjects, such as divorce and body shaming, on her own. But it helped that YouTube had footed the production and marketing costs for “No Offence,” which were substantial. The video is now her most watched, with twelve million views.

On a channel called AsapScience, Gregory Brown, a former high-school teacher, and his boyfriend, Mitchell Moffit, make animated clips about science that affects their viewers’ everyday lives; their most successful videos address topics such as the science of coffee or masturbation. They used their Creators for Change dollars to produce a video about the scientifically measurable effects of racism, featuring the Black Lives Matter activist DeRay Mckesson. While the average AsapScience video takes a week to make, the racism video took seven or eight months: the level of bad faith and misinformation surrounding the topic, Brown said, demanded extra precision. “You need to explain the study, explain the parameters, and explain the result so that people can’t argue against it,” he said. “And that doesn’t make the video as interesting, and that’s a challenge.” (Toxic content proliferates, in part, because it is comparatively easy and cheap to make; it can shirk the burden of being true.)

YouTube hopes that Creators for Change will have a role-model effect. The virality of YouTube videos has long been driven by imitation: in the site’s early days, clips such as “Crazy frog brothers” and “David After Dentist” led fans and parodists to reënact their every move. When it comes to political videos, imitation has cut both ways. The perceived popularity of conspiracy videos may have led some YouTubers to make similar clips; conversely, many Creators for Change ambassadors cite other progressive YouTubers as inspirations. (Prajakta Koli based her sketches on those of Lilly Singh, a sketch-comedy YouTuber who has also spoken at the United Nations.) In theory, even just broadcasting the idea that YouTube will reward social-justice content with production dollars and free marketing might encourage a proliferation of videos that denounce hate speech.

And yet, on a platform like YouTube, there are reasons to be skeptical about the potential of what experts call “counterspeech.” Libby Hemphill, a computer-science professor at the University of Michigan’s Center for Social Media Responsibility, studies how different kinds of conversations, from politics to TV criticism, unfold across social media; she also prototypes A.I. tools for rooting out toxic content. “If we frame hate speech or toxicity as a free-speech issue, then the answer is often counterspeech,” she explained. (A misleading video about race and science might be “countered” by the video made by AsapScience.) But, to be effective, counterspeech must be heard. “Recommendation engines don’t just surface content that they think we’ll want to engage with—they also actively hide content that is not what we have actively sought,” Hemphill said. “Our incidental exposure to stuff that we don’t know that we should see is really low.” It may not be enough, in short, to sponsor good content; it must also reach the people who would never go looking for it.