Welcome new subscribers! Today we celebrate The Interface’s second birthday. For two years now, we’ve done our best to bring you the day’s most important developments in technology and democracy. The next year, which will bring us a US presidential election, promises to be the most consequential yet. Thank you for reading it and sharing with your friends and coworkers. And if you’re in San Francisco and would like to meet us in person, there are still some tickets available for our first-ever live event on Tuesday, October 22nd: my conversation with the brilliant disinformation researcher Renee DiResta. I hope to see you there!

Last week, Facebook said it had changed its advertising policies to exempt politicians and political parties from rules banning misinformation. As a result, candidates are now free to lie in their ads, and some of them are already doing so, and I bet you can guess who one of them is!

When I wrote about the change last week, I argued in its favor. My rationale — not all of which made it into that first column — went as follows.

There is a long tradition of lying in American politics, much of which has taken place in advertising. See, for example, the history of direct mail campaigns, or of robocalls.

Lying is bad, but it’s good to know which politicians are liars.

A robust (if struggling!) media apparatus aggressively documents and describes these lies as part of its campaign coverage.

The debate about candidates’ positions and their relative truthfulness is an important part of the campaign and of a healthy democracy.

On balance, I would rather have these discussions taking place in public than deputize a for-profit corporation to preempt them.

I regret to say that my rationale satisfied no one, 24 people unsubscribed from this newsletter, and the debate continued raging into the weekend.

Elizabeth Warren, whose campaign has been energized by our report from earlier this month that Mark Zuckerberg intended to “go to the mat” to prevent Facebook from being broken up, seized upon the policy change and called the company’s bluff — buying an ad that said, erroneously, that Zuckerberg had endorsed Trump in the 2020 election. (The ad goes on to say that Warren is fibbing to make a point.) Here are Cecilia Kang and Thomas Kaplan in the New York Times:

In a series of tweets on Saturday, Ms. Warren, a senator from Massachusetts, said she had deliberately made an ad with lies because Facebook had previously allowed politicians to place ads with false claims. “We decided to see just how far it goes,” Ms. Warren wrote, calling Facebook a “disinformation-for-profit machine” and adding that Mr. Zuckerberg should be held accountable. Ms. Warren’s actions follow a brouhaha over Facebook and political ads in recent weeks. Mr. Trump’s campaign recently bought ads across social media that accused another Democratic presidential candidate, Joseph R. Biden Jr., of corruption in Ukraine. That ad, viewed more than five million times on Facebook, falsely said that Mr. Biden offered $1 billion to Ukrainian officials to remove a prosecutor who was overseeing an investigation of a company associated with Mr. Biden’s son Hunter Biden.

Then Facebook — just a few days after Zuckerberg told employees he would “try not to antagonize her further” — antagonized her further. A company Twitter account responded to the senator, noting that various broadcast networks had aired the Trump-Biden ad “nearly 1,000 times.”

And then Warren asked what I thought was a pretty good question. She tweeted:

“You’re making my point here. It’s up to you whether you take money to promote lies. You can be in the disinformation-for-profit business, or you can hold yourself to some standards. In fact, those standards were in your policy. Why the change?”

I continue to think Facebook can make a good business case for accepting political ads with misinformation. And I think there’s a case that our politics are better when candidates have a wide latitude to speak freely, without intervention from private businesses.

At the same time, though — and the events of the past few days have driven this home for me — there might not be much of a moral case for Facebook’s policy here. Here’s why.

One, if Facebook accepts that politicians will lie in their ads on the site, then the company also has to accept that it will be a partner in spreading misinformation. (This is not a theoretical worry; the Trump-Biden ad was viewed more than 5 million times.) Given how much Facebook has invested in what it calls “platform integrity” — a coordinated effort to rid the site of misinformation — this policy is counterproductive and (for those who work on platform integrity) demoralizing.

Two, the platform has historically incentivized inflammatory speech, and permitting lies in ads could mean that Facebook once again plays a key role in the outcome of the 2020 election. Charlie Warzel argues in the New York Times that given the Trump campaign’s propensity for telling outrageous lies, Facebook’s policy is a de facto thumb on the scale for Republicans. This is notable for lots of reasons, starting with the fact that the stated intent of the policy is to ensure that Facebook has less influence over political outcomes.

Three — and the point Warren made so sharply — is that Facebook’s policy puts it in the uncomfortable position of profiting from politicians’ lies. It doesn’t matter that political ads make up less than 5 percent of the company’s revenue — Facebook can now expect to take a public-relations hit every time a politician’s lie goes viral.

Finally, Josh Constine added a fourth consideration: Facebook’s sophisticated ad-targeting capabilities could make an untruthful political ad even more pernicious than, say, a broadcast TV ad. Reach the right low-information voter with the right lie at scale, the argument goes, and you just might tip the country into full-blown idiocracy.

I find the collective arguments in this case … persuasive? I still would far rather citizens sort fact from fiction on their own, using the information that they gather from a free press. But I acknowledge that, for the most part, they don’t.

Time after time on tech platforms, we have seen how a posture of neutrality winds up benefiting the worst actors at the expense of everyone else. And there’s a real risk of that happening again here.

In the meantime, Facebook’s effort to avoid one trap has landed it in another. It may have sidestepped lots of tricky questions about what is true and what is false in the political arena. But there are few ways in which we demonstrate our values more clearly than in what we will accept money to do. Facebook has now opened itself up to the legitimate criticism that it is spreading misinformation for profit. And with each new viral lie, I expect that criticism will only grow louder.

Bonus reading: impeachment ads are so clickbaity one company is using them to sell spices; Zuckerberg is having conservatives over for dinner and getting nothing of practical value out of it; Zuckerberg says of those dinners, “Meeting new people and hearing from a wide range of viewpoints is part of learning. If you haven’t tried it, I suggest you do!” Which is maybe the spiciest Zuckerberg response to a story about him that I can recall?

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Pinterest used AI to reduce reported self-harm content by as much as 88%. The technology allows it to hide content that displays or encourages self-injury. This isn’t the first time Pinterest has led the way on bold content moderation — back in February, I wrote about its decision to stop returning results for searches related to vaccinations. (Kyle Wiggers / VentureBeat)

Trending sideways: Apple was caught using China-owned Tencent to check URLs for fraudulent behavior before iOS users visit them; the company says it’s only doing this in China; it’s complicated.

Trending down: Facebook’s reply to Warren got ratioed in the original sense of The Ratio; lots of people are mad about Zuckerberg having Tucker Carlson for dinner; they’re also dunking on his response to the story.

Trending down: Apple and Google have now both yanked apps from their respective app stores owing to pressure from the Chinese government. In Apple’s case that includes a crowdsourced law enforcement map that it first rejected, then let into the store, and then banned again.

Governing

⭐ Apple warned the creators of some Apple TV+ shows not to portray China in a bad light. Here’s the story from Alex Kantrowitz and John Paczkowski at BuzzFeed:

In early 2018 as development on Apple’s slate of exclusive Apple TV+ programming was underway, the company’s leadership gave guidance to the creators of some of those shows to avoid portraying China in a poor light, BuzzFeed News has learned. Sources in position to know said the instruction was communicated by Eddy Cue, Apple’s SVP of internet software and services, and Morgan Wandell, its head of international content development. It was part of Apple’s ongoing efforts to remain in China’s good graces after a 2016 incident in which Beijing shut down Apple’s iBooks Store and iTunes Movies six months after they debuted in the country.

Protesters are trying to get Blizzard Entertainment’s popular video game Overwatch banned in China, using memes of the game’s Chinese hero Mei. They’re mad at Blizzard for suspending Hearthstone player Chung “blitzchung” Ng Wai for expressing support for Hong Kong. (Nicole Carpenter / Polygon)

Riot Games went the opposite route, telling League of Legends broadcasters to refrain from discussing “sensitive topics” on the air. (Makena Kelly / The Verge)

Previously, US companies doing business in China argued they could bring Western values to the country. Now, it seems China is bringing its authoritarian values to the United States. Farhad Manjoo argues it’s time for businesses to play hardball. (Farhad Manjoo / The New York Times)

The Chinese government built a back door into a propaganda app, allowing it to see users’ messages and photos, browse their contacts and Internet history, and activate an audio recorder inside the devices. The app is reportedly the most widely downloaded app in China, with more than 100 million users. (Anna Fifield / The Washington Post)

China also announced a new rule that requires residents to pass a facial recognition test in order to get internet access on their phone or computer. The government already requires people to have a valid ID in order to get a phone. Starting on December 1st, telecom companies will have to use facial recognition to test whether that ID is legitimate. (Nicole Hao / Epoch Times)

US politicians are using location data from smartphones to target voters more effectively ahead of the 2020 election. The information allows them to track and segment people based on the apps they’ve used and places they’ve been. (Think rallies: churches, or gun clubs). (Sam Schechner, Emily Glazer and Patience Haggin / The Wall Street Journal)

Facebook released a statement about the EU ruling that allows member countries to force the company to take down posts worldwide — even in countries where those posts are legal. Facebook said the ruling “undermines the long-standing principle that one country does not have the right to impose its laws on another country.” (Facebook)

One of the people who helped get Facebook to suspend Israeli Prime Minister Netanyahu’s chatbot, known as Bibi-Bot, discusses why the suspension campaign succeeded. The bot was spreading misinformation about Arab voters. (Anat Ben-David / Medium)

Facebook fired a Chinese engineer who accused the company of mistreating foreign employees. News of Yi Yin’s firing spread on WeChat, and is now being widely covered by Chinese media. (Zheping Huang / Bloomberg)

Tech companies including Facebook, Twitter, Google, and Microsoft are trying to combat misinformation about the 2020 census. (David Uberti / Vice)

The German synagogue shooter’s Twitch video didn’t go viral on social media, thanks to an alliance between Facebook, Amazon, Twitter, Microsoft, and YouTube that formed in 2017. The Global Internet Forum to Counter Terrorism uses a shared database of 200,000 “hashes,” or digital fingerprints, to identify violent videos and propaganda. (David Uberti / Vice)

Trump joined Twitch as part of his re-election strategy. Bernie Sanders is already using the platform, but the president’s move is more notable given that Twitch is owned by Amazon, a reliable punching bag for Trump. (Julia Alexander / The Verge)

Google defended its contributions to climate change-denying think tanks, saying it’s not unusual for companies to contribute to organizations they don’t fully agree with. The Guardian reported the contributions were likely an attempt to influence conservative lawmakers and push a deregulation agenda. (Makena Kelly / The Verge)

GitHub CEO Nat Friedman had a tense meeting with employees after a leaked email revealed the company is renewing its contract with Immigration and Customs Enforcement. The news made some employees question how GitHub plans to interact with non-democratic countries like China. Friedman said the company’s position on China is “evolving.” (Colin Lecher / The Verge)

India’s National Crime Records Bureau is collecting bids from private companies to create the country’s first centralized facial recognition surveillance system. Facial recognition technology is already being tested and used at many airports, police stations, malls, and schools across the country. (Pranav Dixit / BuzzFeed)

Industry

⭐ Pinterest is rolling out a new feature that lets users review and edit their activity history and interests to give them more control of what they see. The Home Feed Tuner essentially allows users to help shape the Pinterest algorithm. Here’s Will Oremus at OneZero:

It’s a feature that Pinterest expects will reduce complaints and raise satisfaction among a small subset of power users. But it will do little to help the site expand, and could even reduce engagement for those who use it by limiting the information available to the algorithm. It’s the kind of trade-off the company says it’s willing to make, especially since early tests showed no significant drop-off in user activity. Other trade-offs are proving trickier, however, like how to understand users deeply enough to keep them coming back for more, without boring them, boxing them in, or creeping them out. “Users don’t want to be pigeonholed,” says Candice Morgan, the company’s head of inclusion and diversity. She commissioned a study earlier this year to understand how Pinterest could better serve users from backgrounds that the platform underrepresents. “They don’t want us to guess what they’re going to like based on their demography,” she adds.

Facebook has found few friends in its effort to tighten encryption across its messaging apps. Google and Apple, which rely on encryption in their own products, notably avoided voicing public support for Facebook’s stance. (Ashley Gold / The Information)

An Indiana woman posted a warning about a mass shooter at Marshall County’s annual Blueberry Festival in a private Facebook Group with over 5,000 members. It turned out to be a false alarm, but many people stayed home anyway, showing the impact of misinformation — especially when it comes from people we trust. (Bryan Pietsch / Reuters)

Instagram is finally going to let users DM on the desktop, according to a test spotted by reverse-engineering expert Jane Manchun Wong. Yes please! (Jane Manchun Wong / Twitter)

Visa, Mastercard, eBay, Stripe, and Mercado Pago have all withdrawn from the Libra Association, dealing a major blow to Facebook’s plans for a distributed, global cryptocurrency. The withdrawals leave Libra with no major US payment processor since PayPal left the association earlier this month. Libra did announce new board members today after a meeting in Geneva. (Russell Brandom / The Verge)

Amazon got to where it is today through ruthless efficiency and an intense work ethic. This profile examines how the company governs executives, employees, and contractors. (Charles Duhigg / The New Yorker)

As the CDC tries to control a rash of vaping-related lung injuries, YouTube is hosting dozens of videos that offer step-by-step instructions on how to make DIY THC vape oil. Some involve the use of potentially harmful chemicals and have been viewed millions of times. (Stephanie M. Lee and Dan Vergano / BuzzFeed)

Here’s a sharp PewDiePie profile that examines the YouTuber’s rise to fame, why he doesn’t consider himself a white nationalist (or even a conservative), and what happened to the $50,000 he pledged to give the Anti-Defamation League and then retracted. See also its useful description of the insular culture of “inner YouTube.” (Kevin Roose / The New York Times)

Google grants for news organizations tend to be made in places where the company faces pressure from politicians, the public, and the press, says a report by the Campaign for Accountability. (Alex Kantrowitz / BuzzFeed)

Google’s four-person product inclusion team is trying to steer the company to make bias-free products and services. (Danielle Abril / Fortune)

Twitter released a new desktop app for users running the latest version of macOS, known as Catalina. The app was built with a macOS framework called Catalyst that allows developers to port their iPad apps over to the Mac with less work than before. (Sam Byford / The Verge)

A Japanese man was arrested for allegedly stalking a pop star and attacking and groping her at her home. He reportedly found her by studying photos she posted on social media, observing a train station reflected in her eyes, finding that train station using Google Street View, waiting for her at the train station, and following her home. (Jay Peters / The Verge)

And finally...

Here’s your hashtag of the year.

Talk to us

Send us tips, comments, questions, and Elizabeth Warren clapbacks: casey@theverge.com and zoe@theverge.com.