The Interface is a daily column and newsletter about the intersection of social media and democracy. Subscribe here.

I.

On Thursday, Facebook co-founder Chris Hughes said the company should be broken up, and required to spin off WhatsApp and Instagram. Increasingly, Democratic presidential candidates agree with him. Sen. Elizabeth Warren had already issued her call to break up big tech companies; yesterday, Sen. Kamala Harris said “we have to seriously take a look” at breaking up Facebook. Today, Joe Biden said a Big Tech breakup is “something we should take a really hard look at.”

And Facebook, for its part, finally responded at length. First, Nick Clegg, the company’s head of communications and policy, had an op-ed in the New York Times. He argued that the company has plenty of meaningful competition, and that breaking it up would only make problems like harmful speech and data protection harder to solve.

Big in itself isn’t bad. Success should not be penalized. Our success has given billions of people around the globe access to new ways of communicating with one another. Earning money from ads means we can provide those tools to people for free. Facebook shouldn’t be broken up — but it does need to be held to account. Anyone worried about the challenges we face in an online world should look at getting the rules of the internet right, not dismantling successful American companies.

Mark Zuckerberg struck a similar note in an interview with reporters in France, where he had traveled to meet with President Emmanuel Macron. “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference.”

In other words, only a company of Facebook’s size can afford to address the problems caused by a platform of Facebook’s size.

A counter-argument might be that the negative externalities of platforms scale along with their size. A web forum where a few dozen people meet to espouse white supremacist ideals might offend us, but if the mechanics of the platform do not allow it to recruit others, it’s arguably just free speech. It’s hard to see how the government could justify shutting it down, although the forum’s web hosting provider would retain the right to do so.

On the other hand, a platform with billions of people that steers people into fringe conspiracy groups via its recommendation algorithms arguably poses a different set of problems to the world. The platform’s viral sharing mechanics, combined with its scale, make it complicit in the spread of misinformation, terrorism, and hate speech.

I understand that Facebook has more money to spend on this problem than the average web forum. But I don’t understand how Facebook can credibly divorce the scale of its user base from the scale of its consequences.

A fair question to ask is what we might expect if Facebook no longer funded platform integrity efforts on WhatsApp and Instagram (and no longer had revenues from those platforms to fund its own work). If the separated platforms had 1.5 billion monthly users apiece, would they be able to protect them effectively from bad actors?

Facebook says no, but then Facebook has historically been bad at predicting how people will abuse the service. To me it seems just as possible that the separate companies would compete on platform integrity, generating useful new solutions for one another to shamelessly copy. Facebook has been slow and reactive when it comes to security and data protection; it seems possible another company would act more nimbly.

In any case, Facebook has another approach here that it prefers: France’s. After six months in which French regulators worked inside the company “monitoring its policies,” which I would like someone to please make a multi-camera sitcom about, the government released a 33-page report with recommendations for how regulation should work. The report “recommends that French authorities should have more access to Facebook’s algorithms and greater scope to audit the company’s internal policies against hate speech,” Mathieu Rosemain and Gwénaëlle Barzic report. And Zuckerberg likes it:

“If more countries can follow the lead of what your government has done here, that will likely end up being a more positive outcome for the world in my view than some of the alternatives,” Zuckerberg told reporters at Facebook’s Paris office after the meeting at the Elysee palace. “We need new rules for the internet that will spell out the responsibilities of companies and those of governments,” he told France 2 television in an interview. “That is why we want to work with the team of President Macron. We need a public process.”

Will Facebook be broken up, or will it simply submit to some new monitoring regime? (The company is lately very happy to submit to monitoring regimes.) The latter seems more likely to me, but I’ve been somewhat taken aback at how quickly the former has become a litmus test for Democratic presidential candidates. If we’re all still talking about this when the primary debates start, I’d say the prospect of a breakup becomes a bit more likely.

II.

Hughes’s op-ed kicked off a media tour that included, among other things, a campaign-ad-style YouTube video, an appearance on The Daily, and an interview with Kara Swisher. I was surprised at how much like a politician Hughes sounded on YouTube, and how squirmy and uncomfortable he sounded on podcasts.

One thing that became clear to me on Thursday is that lots of current and former Facebook employees don’t think very much of Hughes. An important reason why can be found in his interview with Swisher:

Did you miss doing that, leaving Facebook?

You know, I had mixed feelings about it, but my experience was really different than Mark’s and Dustin’s. I mean, Facebook was a mission in and of itself for Mark, and for me it was a company that I enjoyed being a part of, growing. I learned a lot, it was exciting, there were all kinds of challenges, but it was clear to me early on that Facebook was not my life’s work.

Hughes doesn’t believe in Facebook’s mission today — but he didn’t really believe in it then, either. Externally, this doesn’t much matter — he’s still a co-founder of Facebook, and his opinion will be duly noted whenever we write about the case for breaking up the company. But internally, Hughes will continue to be dismissed as a historical footnote.

And speaking of co-founders: In my Thursday newsletter on the subject, I noted that another co-founder of Facebook, Dustin Moskovitz, had donated to Color of Change, which is trying to persuade Facebook shareholders to vote against Zuckerberg being re-nominated to the company’s board. To me the donation was telling, given that Moskovitz has said almost nothing about Facebook publicly in the past couple of years. But on Twitter, Moskovitz told me that the donation was intended only to help Democrats in the 2016 election, and that he should not be characterized as a Facebook critic.

When I asked what his argument against a breakup was, he said: “If the goal is to improve democracy we should break up Fox and Sinclair first.” He later deleted the tweet.

The Trauma Floor

Three months after I reported on the working conditions at Facebook’s content moderation facilities, and with multiple class-action lawsuits brewing, the company said today it would increase the minimum pay by 20 percent and take new steps to monitor and improve workers’ mental health:

In February, The Verge reported that Facebook contractors in Phoenix are suffering from long-term mental health issues after working as content moderators. Their jobs require them to view a steady stream of violent and disturbing content, and several moderators told us they continued to struggle with PTSD-like symptoms. Other moderators told us that the work had made them more likely to believe in the fringe conspiracy theories that they encountered each day at work. In response, Facebook said it would now require its vendors to provide on-site counseling during all hours of operation, rather than only during the day shift. It will also begin surveying contractors about their mental health twice a year “and use the results to shape our programs and practices,” the company said.

This is great news.

Democracy

Supreme Court says Apple will have to face App Store monopoly lawsuit

I often say that we have no meaningful antitrust regulation in this country. But this? This is meaningful, and I expect it will spur pro-competition moves in the App Store before it’s all over. Adi Robertson reports:

The Supreme Court is letting an antitrust lawsuit against Apple proceed, and it’s rejected Apple’s argument that iOS App Store users aren’t really its customers. The Supreme Court upheld the Ninth Circuit Court of Appeals’ decision in Apple v. Pepper, agreeing in a 5-4 decision that Apple app buyers could sue the company for allegedly driving up prices. “Apple’s line-drawing does not make a lot of sense, other than as a way to gerrymander Apple out of this and similar lawsuits,” wrote Justice Brett Kavanaugh. Apple had claimed that iOS users were technically buying apps from developers, while developers themselves were Apple’s App Store customers. According to an earlier legal doctrine known as Illinois Brick, “indirect purchasers” of a product don’t have the standing to file antitrust cases. But in today’s decision, the Supreme Court determined that this logic doesn’t apply to Apple.

Exclusive: India orders anti-trust probe of Google for alleged Android abuse - sources

And speaking of meaningful antitrust regulation, here’s one to watch from last week:

India’s antitrust watchdog has ordered an investigation into Alphabet Inc’s unit Google for allegedly abusing the dominant position of its popular Android mobile operating system to block rivals, two sources aware of the matter told Reuters.

Russia Is Targeting Europe’s Elections. So Are Far-Right Copycats.

Matt Apuzzo and Adam Satariano report that Russian disinformation operations are in full swing abroad:

Less than two weeks before pivotal elections for the European Parliament, a constellation of websites and social media accounts linked to Russia or far-right groups is spreading disinformation, encouraging discord and amplifying distrust in the centrist parties that have governed for decades. European Union investigators, academics and advocacy groups say the new disinformation efforts share many of the same digital fingerprints or tactics used in previous Russian attacks, including the Kremlin’s interference in the 2016 U.S. presidential campaign.

Your 5G Phone Won’t Hurt You. But Russia Wants You to Think Otherwise.

William J. Broad reports that Kremlin mouthpiece RT America has mounted a campaign on YouTube to promote the idea that next-generation 5G technology will cause health problems:

RT America aired its first program assailing 5G’s health impacts last May, its only one in 2018. Already this year, it has run seven. The most recent, on April 14, reported that children exposed to signals from 5G cellphone towers would suffer cancer, nosebleeds and learning disabilities.

YouTube Has Downgraded Carl Benjamin’s Sargon Of Akkad Account After He Talked About Raping A British MP

Good:

YouTube has demonetised Carl Benjamin’s Sargon of Akkad video channel after the political commentator turned UKIP candidate made comments about raping a woman MP. Earlier this week, West Midlands police announced an investigation into Benjamin’s remarks made in a YouTube video about Labour MP Jess Phillips, where the UKIP European election candidate questioned whether he’d rape her before concluding “nobody’s got that much beer”.

Turkish watchdog says it fines Facebook $271,000 for data breach

And some people say FTC fines are cheap!

Elsewhere

Fear-based social media Nextdoor, Citizen, Amazon’s Neighbors is getting more popular

At a time when folks like me are complaining about the lack of competition among social networks, there’s a surge in services devoted to making you terrified of your immediate surroundings, Rani Molla reports:

Violent crime in the US is at its lowest rate in decades. But you wouldn’t know that from a crop of increasingly popular social media apps that are forming around crime. Apps like Nextdoor, Citizen, and Amazon Ring’s Neighbors — all of which allow users to view local crime in real time and discuss it with people nearby — are some of the most downloaded social and news apps in the US, according to rankings from the App Store and Google Play.

Swatting Attacks Increase Security Concerns Across Silicon Valley

Robert McMillan and Jeff Horwitz write about a disturbing rise in attacks designed to bring armed police to the homes of Silicon Valley executives under false pretenses. To my mind, there’s a good case for considering this attempted murder:

The swatting attacks early this year came weeks after someone posted to an online message board personal details—including home addresses and names of family members—of some of the biggest figures in Silicon Valley. The list, which was viewed by The Wall Street Journal, also included information on journalists, celebrities and government officials. At least one of the celebrities included in the list was subsequently swatted. The anonymous posting with the list, which has since been taken down, was hosted on the 8chan website, which describes itself as “The darkest reaches of the internet.” Long a hotbed for anti-Muslim and anti-Semitic material, 8chan was used to announce the mosque terrorist attack in New Zealand in March and a synagogue shooting in Poway, Calif., a month later.

How Money Flows From Amazon to Racist Troll Haven 8chan

Speaking of 8chan, it’s largely funded by affiliate fees from Amazon, Judd Legum reports.

‘Fake News Victims’ Meet With Twitter and Facebook

Issie Lapowsky writes about an effort from a nonprofit to introduce victims of internet hoaxes to the platforms that promote those hoaxes:

The discussions organized by Avaaz served as a counterpoint to all that pressure, as individual victims of online harassment campaigns came forward to tell tech companies exactly how they’ve been hurt by the hate and hoaxes that have festered on their platforms. “Our job as advocates is to make them stop for a minute and think about the implications of not acting fast enough,” says Oscar Soria, a senior campaigner with Avaaz.

Twitter bug disclosed some users’ location data to an unnamed partner

Twitter announced a bug today but wouldn’t say how many people it affected or which partner it leaked data to, so I’m just adding it to a file called Things To Keep In Mind The Next Time Twitter Congratulates Itself for Its ‘Transparency.’

This doctor posted online in favor of immunization. Then vaccine opponents targeted her

Liz Kowalczyk reports that physician rating sites are being bombarded by negative reviews from anti-vaccination zealots:

More doctors say they are being attacked online for recommending parents vaccinate their children as part of a coordinated effort by anti-vaccine groups. Fictitious patient reviews are just one tactic; vaccine opponents have also deluged Facebook and Instagram accounts of doctors and practices, medical professionals said. Physician rating sites are ideal targets for anti-vaccine activists because they’re often the first place online where prospective patients get information on doctors, and there are gaps in the verification process. The companies that host online doctor ratings generally do not guarantee a commenter is actually a patient of that doctor. Tello said that she repeatedly e-mailed the companies, but that Vitals and Healthgrades only removed the suspect reviews after she involved a lawyer. She never got through to anyone at Google, she said.

Discord, Slack for gamers, tops 250 million registered users

Popular among gamers and white supremacists, Discord now has 56 million monthly users, the company said.

Launches

Spotify is testing its own version of Stories called ‘Storyline’

Move over, Ping!

Spotify is testing its own version of Stories — the sharing format popularized by social apps like Snapchat and Instagram that has since made its way to other apps like Facebook, YouTube, WhatsApp and others. In Spotify’s case, it’s not called “Stories” but rather “Storyline,” and the focus is on allowing artists to share their own insights, inspiration, details about their creative process or other meanings behind the music. This is very much similar to what Spotify’s “Behind the Lyrics” feature today offers. But instead of pop-up cards that load in time with the music, Spotify Storyline is very much a Stories-like experience, where users tap through the different screens at their own pace, and where horizontal lines at the top indicate how many screens still await them ahead.

Takes

Friend portability is the must-have Facebook regulation

Josh Constine says making it easier for new social networks to bootstrap off of Facebook’s friend graph will restore the competitive balance. (Adam Mosseri responds.)

In other words, the government should pass regulations forcing Facebook to let you export your friend list to other social networks in a privacy-safe way. This would allow you to connect with or follow those people elsewhere so you could leave Facebook without losing touch with your friends. The increased threat of people ditching Facebook for competitors would create a much stronger incentive to protect users and society.

Facebook Algorithms Make It Harder to Catch Extremists

Bernhard Warner says social platforms need to preserve rather than delete terrorist videos:

Designed to identify and take down content posted by “extremists”—“extremists” as defined by software engineers—machine-learning software has become a potent catch-and-kill tool to keep the world’s largest social networks remarkably more sanitized places than they were just a year ago.

Google and Facebook break out the numbers in their quarterly transparency reports. YouTube pulled 33 million videos off its network in 2018—roughly 90,000 a day. Of the videos removed after automated systems flagged them, 73 percent were removed so fast that no community members ever saw them. Meanwhile, Facebook removed 15 million pieces of content it deemed “terrorist propaganda” from October 2017 to September 2018. In the third quarter of 2018, machines performed 99.5 percent of Facebook’s “terrorist content” takedowns. Just 0.5 percent of the purged material was reported by users first.

Those statistics are deeply troubling to open-source investigators, who complain that the machine-learning tools are black boxes. Few people, if any, in the human-rights world know how they’re programmed. Are these AI-powered vacuum cleaners able to discern that a video from Syria, Yemen, or Libya might be a valuable piece of evidence, something someone risked his or her life to post, and therefore worth preserving? YouTube, for one, says it’s working with human-rights experts to fine-tune its take-down procedures. But deeper discussions about the technology involved are rare.

Instagram Is Trying to Curb Bullying. First, It Needs to Define Bullying.

Kevin Roose describes the challenge of keeping up with teenage innovations in bullying on social platforms:

Some teenagers reported feeling bullied when their exes showed off new boyfriends or girlfriends in a menacing way — for example, by tagging the jilted ex in the photo to trigger a notification and rub in the fact that they had moved on to someone new. Instagram came up with a name for this category of bullying — “betrayals” — and started training an algorithm to detect it.

And finally ...

‘Old Town Road’: See How Memes and Controversy Took Lil Nas X to No. 1

My favorite internet story of the year so far is that of “Old Town Road,” Lil Nas X’s too-hot-for-Billboard trap-country masterpiece. Even if you already know the story’s broad outlines, I bet you’ll still love Joe Coscarelli’s video interviews with all of the principal characters in the story, who are uniformly delightful. (Shout out to Billy Ray Cyrus.)

Anyway, this story continues to delight me at every turn.

Talk to me

Send me tips, comments, questions, and your opinion of French internet regulations: casey@theverge.com