The Interface is a daily column and newsletter about the intersection of social media and democracy. Subscribe here.

YouTube’s difficult summer rolls on. Recent stories have revealed that the company might be accidentally generating video playlists for pedophiles; the Federal Trade Commission is investigating the site’s targeting of ads toward children; and the New York Times linked the site’s popularity to the rise of right-wing extremism in Brazil. (Also: everything linked at the top of this column.)

But nothing has defined YouTube’s summer more than the conflict between Vox.com video host Carlos Maza and right-wing pundit Steven Crowder. The conflict — over whether someone with millions of followers should be allowed to repeatedly call another YouTuber a “lispy queer” — highlighted the gap between what YouTube’s community guidelines say is allowed, and what is actually allowed. (Crowder got away with almost everything; his newfound fame almost certainly compensated for any lost revenue from his channel being demonetized.)

Today, in her quarterly letter to YouTubers, CEO Susan Wojcicki took the occasion to defend the idea of a website that lets almost anyone upload a video — even offensive ones. She writes:

A commitment to openness is not easy. It sometimes means leaving up content that is outside the mainstream, controversial or even offensive. But I believe that hearing a broad range of perspectives ultimately makes us a stronger and more informed society, even if we disagree with some of those views. A large part of how we protect this openness is not just guidelines that allow for diversity of speech, but the steps that we’re taking to ensure a responsible community. I’ve said a number of times this year that this is my number one priority. A responsible approach toward managing what’s on our platform protects our users and creators like you. It also means we can continue to foster all the good that comes from an open platform.

The letter is full of links to the good YouTubers — the ones who make silly, educational, kind-hearted videos for their rabid fan bases. All of this is well and good, even if it seems to me to sidestep the central issue at the heart of the debate, which was — what counts as harassment?

The site’s community guidelines still say “Content or behavior intended to maliciously harass, threaten, or bully others is not allowed on YouTube.” There is no discussion of how the context surrounding a video could modify that statement. But in discussing the Maza affair, YouTube said that “context” is the determining factor in whether a harassing video stays up — and that the context of the Crowder videos is essentially media criticism, and therefore allowed.

To my mind, the YouTube debate we should be having isn’t one about open platforms versus closed ones. Rather, it’s about the policies a company advertises versus the ones they enforce.

Meanwhile, here’s an intriguing paper from researchers at Brazil’s Universidade Federal de Minas Gerais and Switzerland’s École polytechnique fédérale de Lausanne that documents another aspect of YouTube’s openness: the way it has attracted a large audience for conservative thinkers. The paper, “Auditing Radicalization Pathways on YouTube,” attempts to measure the site’s ability to nurture extremism by tracking commenters over 11 years. (Note that the paper has not yet been peer-reviewed.)

Researchers grouped conservative YouTubers into three admittedly fuzzy categories of escalating polarity: the “intellectual dark web,” the “alt-lite,” and the “alt-right.” (They built the categories using information from the Anti-Defamation League and Data & Society as well as their own research.) They found that people who began their time on YouTube by commenting on less extreme channels come to comment on more extreme channels over time — evidence, they say, of a “radicalization pipeline.” Here’s some of their data:

Consider, for example, users who in 2006–2012 commented only on I.D.W. or Alt-lite content (227,945 users), as shown in the subplot in the first column and the first row. By 2018, around 10% were lightly infected, and roughly 4% severely or mildly so — which amounts to more than 9k users in total. From the ones who in 2017 commented only in Alt-lite or I.D.W. videos (1,253,751 users), as shown in the last column of the first row, approximately 12% of them became infected — more than 60k users altogether.

There are some obvious limits here, as the authors acknowledge. The fact that someone comments on a set of extremist videos does not necessarily tell us that he himself has become an extremist. And yet the data does seem to suggest that YouTube’s open platform is nudging thousands of people rightward over time — a notable fact in a time of rising extremist violence. (Obligatory disclaimer here that there are many other media forces pushing people to the right, including conservative talk radio and cable news, and some of them are likely more effective in this regard than YouTube.)

In her letter, Wojcicki pledges to remove extremist content more effectively over time. She also reiterates a pledge to update the site’s policy for creator-on-creator harassment. In the meantime, I couldn’t help but notice the No. 1 creator in the Brazilian researchers’ taxonomy of “alt-lite” creators. At 727 million views, his audience dwarfed that of his closest competitor. It was Steven Crowder, of course, and I couldn’t help but wonder how far his malign influence had spread beyond Carlos Maza.

YouTube responds: “While we welcome external research, this study doesn’t reflect changes as a result of our hate speech policy and recommendations updates and we strongly disagree with the methodology, data and, most importantly, the conclusions made in this new research,” a spokesman told me.

Update, August 29th: Added comment from YouTube.

Governing

⭐ China uses LinkedIn to recruit spies abroad. Edward Wong reports on how the scheme works at the New York Times:

Foreign agents are exploiting social media to try to recruit assets, with LinkedIn as a prime hunting ground, Western counterintelligence officials say. Intelligence agencies in the United States, Britain, Germany and France have issued warnings about foreign agents approaching thousands of users on the site. Chinese spies are the most active, officials say. The use of social media by Chinese government operatives for what American officials and executives call nefarious purposes has drawn heightened scrutiny in recent weeks. Facebook, Twitter and YouTube said they deleted accounts that had spread disinformation about the Hong Kong pro-democracy protests. Twitter alone said it removed nearly 1,000 accounts.

The US government is worried that hackers are targeting voter registration databases ahead of the 2020 election. A ransomware attack is feared. (Christopher Bing / Reuters)

European Union regulators have opened an inquiry into Google Jobs. I think it’s time I built an antitrust inquiry tracker. (Foo Yun Chee / Reuters)

Ex-Facebook chief security officer Alex Stamos stops by The Vergecast to talk about whether social platforms are ready for 2020. “Instagram has some of the same problems Twitter has in that you can have a pseudo-anonymous identity on Instagram,” Stamos told Nilay Patel. “The fact that Instagram is mostly images gives some benefit, but not a ton. As you know, the Russian troll factories have professional meme farms.”

Facebook launched “local alerts” to help governments communicate with users in emergencies. The feature has been tested in more than 300 cities to date. (Arriana McLymore / Reuters)

In an op-ed, Sen. Bernie Sanders says that as president he would take stronger antitrust action against Facebook and Google. He argues the companies have been bad for the US journalism industry and democracy in general.

Ben Thompson laments what he calls “privacy hysteria” and calls for a more even-keeled discussion of the benefits and drawbacks of data sharing and collection.

Here’s a somewhat esoteric but provocative paper arguing that platforms enforce their speech rules through probabilities. It turns out to be a useful lens for considering various policy trade-offs. (Mike Ananny / Knight First Amendment Institute)

Industry

⭐ Funders are recommending that the Social Science Research Council end the project if Facebook doesn’t share the data it promised with researchers by September 30th. The Social Media and Democracy Research Grants program was an effort to better understand the relationship between Facebook and governing, but Facebook has delayed the project indefinitely.

Chinese teens are shunning WeChat in favor of Douyin (the Chinese version of TikTok, also owned by ByteDance) and the venerable QQ. (South China Morning Post)

Libra launched a bug bounty program in case you find a bug in it, such as that its success could destabilize the existing political order.

People keep making audio deepfakes of Jordan Peterson.

Yelp answered one of the longest-running questions in technology — what does the Yelp product team do? — by introducing its first redesign in historical memory. And it has personalized recommendations, which usually wreak some kind of unintended havoc on the world. Stay tuned!

And finally ...

Zuckerberg emojis

Recently, Michail Rybakov asked a question we have all asked ourselves a hundred times: what would it look like if you recreated Facebook’s reaction emoji using a photo of Mark Zuckerberg and a disembodied neural network? But unlike most of us, Rybakov actually went through with it.

The results are (1) terrifying and (2) now available as a Telegram sticker pack.

Talk to me

Send me tips, comments, questions, and controversial videos that should be left up: casey@theverge.com.