The Interface is a daily column and newsletter about the intersection of social media and democracy. Subscribe here.

On Monday, a left-leaning civil rights audit of Facebook urged the company to expand its ban on white nationalist content by purging additional words, phrases, and symbols from the social network. The audit had been announced a year ago to address allegations of bias on Facebook, and the company has now offered two updates about the steps it has taken in response to auditors’ findings.

But the civil rights audit represents only half of Facebook’s efforts to seek independent review of its potential biases, and the other one has largely gone missing. On May 1st, 2018 — the same day it announced the civil rights audit — the company said it had formed a “conservative advising partnership.” As Sara Fischer reported at the time in Axios:

The conservative bias advising partnership will be led by former Arizona Republican Sen. Jon Kyl, along with his team at Covington and Burling, a Washington law firm. Kyl will examine concerns about alleged liberal bias on Facebook, internally and on its services. They will get feedback directly from conservative groups and advise Facebook on the best way to work with these groups moving forward. The Heritage Foundation, a conservative public policy think tank, will convene meetings on these issues with Facebook executives.

Then Kyl was appointed to serve out the remainder of John McCain’s term, and work slowed. His committee has released no public report, and made no recommendations. Here’s what a Facebook spokesman told me about the Kyl project today:

Senator Kyl and his team have talked to over 130 of the nation’s leading conservative groups and individuals to see how our policies are affecting them and their communities. The team is now meeting with people from Facebook’s policy and product teams to gain a better understanding of Facebook’s internal and external policies as well as our products and services.

It’s unclear when this team’s report might be made available. But even if it largely exonerated Facebook, would anyone in its intended audience believe it?

Before you answer, read Tony Romm in Tuesday’s Washington Post, reporting on an upcoming “social media summit” at the White House:

The president’s top aides so far have said their scheduled, July 11 event aims to assemble “digital leaders” to discuss the “opportunities and challenges of today’s online environment.” In doing so, though, the White House quietly has invited tech’s top conservative critics in politics and media, The Post has learned, some of whom say the solution to Silicon Valley’s perceived political bias is to break up the tech giants or more aggressively regulate them. [...] Trump repeated his criticisms about social media companies in an interview on Fox News that aired Monday, telling host Tucker Carlson that Facebook, Google and Twitter were “against me” before suggesting the administration might “take action” against them.

There continues to be no evidence that social networks are routinely suppressing mainstream conservative viewpoints. (If anything, they’re promoting more conservative viewpoints — along with more liberal ones.) But the idea that social networks lean liberal has proven to be wildly popular with conservative audiences, leading to a series of theatrical Congressional hearings in which lawmakers wring their hands about the liberal voting records of Silicon Valley tech workers, and raise the specter that mystery algorithms are quashing free speech.

This gets easier to do as the definition of “bias” expands to include any negative outcome that anyone ever experiences on social media, as I wrote here in May. And the Trump administration has worked to accelerate this process, most notably with its since-discontinued effort to collect people’s complaints about social networks via an online form.

Conservative lawmakers appear to be so invested in the idea that they’re being discriminated against that it’s hard to imagine what Jon Kyl could tell them that would change their minds. If anything, extending the ban on white nationalist terminology will only reinforce the idea that Facebook is suppressing certain forms of political speech. (Because ... it actually is, in this narrow instance.)

Perhaps the conservative advisory group will emerge with some smart compromises that will reassure constituents that Facebook engineers aren’t working to tip the scales of political discourse. But recent history suggests that popular falsehoods about Facebook are nearly impossible to kill. “Facebook sells your data” is one; “Facebook is listening to your phone” is another. “Facebook censors conservatives” seems likely to join that depressing canon of myths, and it’s not clear what anyone can do about it.

Pushback

In yesterday’s edition I argued that Facebook’s move to test a dedicated hate speech queue, in the hopes that it would lead to better moderation outcomes, could create new mental health burdens for moderators.

A Facebook spokesperson offered a few notes:

Moderators volunteered for the test, and can opt out.

Most of the volunteers have chosen not to opt out.

Moderators get better access to Facebook’s hate policy experts, which can help them improve their scores.

They also get continuous training and attend Q&A sessions about grey areas in the policy.

“Content moderation is still a fairly new field and no one has it entirely figured out,” the company told me. “We will continue to work hard, as we have been doing, to ensure that we are taking care of those that choose to work in the field of content review.”

Democracy

House lawmakers officially ask Facebook to put Libra cryptocurrency project on hold

Proactive regulation of a social network? In the United States? I never thought I’d see the day. Makena Kelly reports:

House Democrats are requesting Facebook halt development of its proposed cryptocurrency project Libra, as well as its digital wallet Calibra, until Congress and regulators have time to investigate the possible risks it poses to the global financial system. Rep. Maxine Waters (D-CA), the chairwoman of the House Financial Services Committee, hinted at a move like this last month shortly after the project was announced. Waters’s letter today, sent to Facebook’s CEO Mark Zuckerberg, Chief Operating Officer Sheryl Sandberg, and Calibra CEO David Marcus, formalizes that request from a few weeks ago. Aside from Waters, the letter is signed by House Finance’s subcommittee leaders.

As Facebook cracks down on fake political ads, businesses are getting caught in the crossfire

Small businesses are complaining that Facebook’s rules for political advertising are making their lives more difficult, Megan Graham reports:

But advertisers say the changes Facebook made to its artificial intelligence system does more than just flag political ads. Many ads that mention social issues (like marketing for “eco-friendliness”) get caught in Facebook’s digital net, even if they’re not outright advocating for a cause, advertisers said. CNBC spoke to eight advertisers who have encountered various issues with flagging. Many of them characterized the system as overly broad and confusing. Flagged ads can end up in limbo as they await human review, sometimes taking days to get approval, they said. Small business advertisers said they were particularly affected by the political ad filter since ads are often flagged automatically and they often don’t have quick access to a human reviewer to appeal to.

More than 200 companies sign brief calling on SCOTUS to recognize LGBTQ rights

Facebook, Google, Microsoft, and Pinterest are among more than 200 companies that signed a brief arguing existing laws against sex discrimination protect LGBTQ people. Ina Fried reports:

The Supreme Court is set to take up that question in a trio of cases it is expected to hear in its next session.

Google’s Jigsaw Was Supposed to Save the Internet. Behind the Scenes, It Became a Toxic Mess

Jigsaw, “Google’s internet freedom moonshot,” has a mission of using technology to make the world safer. But the office environment is miserable, current and former employees tell Lorenzo Franceschi-Bicchierai:

“The abuse has been so great that there’s now a support group for people to get out of the fucking team,” another former employee told Motherboard. “There is an organized underground network of Google current and former employees helping women leave the team, given how bad the abuse and discrimination have been.” Years ago, so many women felt mistreated that other colleagues set up a kit in the bathroom with mascara, moisturizing spray, and other items to help employees in distress hide their tears, according to two sources who used to work at Jigsaw. The kit was not discussed in the office, but it became an open secret among women on the team.

Germany fines Facebook for under-reporting complaints

“German authorities have fined Facebook 2 million euros ($2.3 million) for under-reporting complaints about illegal content on its social media platform in breach of the country’s law on internet transparency,” Thomas Escritt reports.

Under Germany’s network transparency law, social media platforms are required to report the number of complaints of illegal content they have received. The charge that Facebook did not report the full extent of the complaints it received could undermine its efforts to burnish its reputation.

Elsewhere

They turn to Facebook and YouTube to find a cure for cancer — and get sucked into a world of bogus medicine

Here’s a story in three links. One, Abby Ohlheiser had a story last week on misinformation about health claims spreading quickly on Facebook and YouTube:

I found Mari’s videos without looking for them last fall, when a search for a smoothie recipe opened up an algorithmic tunnel to videos that claimed to know the secret to curing cancer. These tunnels, forged by Google searches and Facebook recommendations, connect relatively staid health and nutrition advice to fringe theories, false claims and miracle juices. But the web of false, misleading and potentially dangerous cancer “cures” and conspiracy theories isn’t just there for those who stumble into it accidentally. More often it ensnares people who are reeling from bad news and groping for answers.

Facebook, YouTube Overrun With Bogus Cancer-Treatment Claims

Two, on Tuesday, Daniela Hernandez and Robert McMillan followed up with their own report on the subject:

A Journal investigation found misinformation about cancer treatment widely available on social-media sites. The Journal spoke to dozens of oncologists, patients, lawyers, privacy experts and company representatives, and quantified the reach of several social-media accounts that promoted scientifically unvalidated cancer therapies. As of Monday, YouTube videos viewed millions of times were among the postings advocating the use of a cell-killing, or necrotizing, ointment called black salve to treat skin cancer. Use of the ointment can inadvertently burn or kill healthy skin, and doesn’t remove cancerous growths beneath the skin, as is claimed in some videos, said David Gorski, a professor of surgery at Wayne State University School of Medicine in Detroit who edits the blog Science-Based Medicine. The wounds could also lead to infection.

Addressing Sensational Health Claims

Three, Facebook said today it had taken steps to reduce the spread of exaggerated health claims in the News Feed:

We know that people don’t like posts that are sensational or spammy, and misleading health content is particularly bad for our community. So, last month we made two ranking updates to reduce (1) posts with exaggerated or sensational health claims and (2) posts attempting to sell products or services based on health-related claims.

Facebook Is Censoring Harm Reduction Posts That Could Save Opioid Users’ Lives

Speaking of health information, Maia Szalavitz reports that Facebook’s effort to crack down on posts promoting opioids has turned up some unfortunate false positives:

As Facebook rolls out its campaign with the Partnership for Drug-Free Kids to “Stop Opioid Silence” and other initiatives to fight the overdose crisis, some stalwart advocates in the field are seeing unwelcome changes. In the past few months, accounts have been disabled, groups have disappeared, posts containing certain content—particularly related to fentanyl—have been removed, and one social media manager reports being banned for life from advertising on Facebook. In its efforts to stop opioid sales on the site, Facebook appears to be blocking people who warn users about poisonous batches of drugs or who supply materials used to test for fentanyls and other contaminants. Just as 1990s web security filters mistook breast cancer research centers for porn sites, today’s internet still seems to have trouble distinguishing between drug dealers and groups trying to reduce the death toll from the overdose crisis. VICE reviewed screenshots and emails to corroborate the claims made in this story.

Adidas’ social media campaign backfires, sending out anti-Semitic tweets

It has been seven years since Mountain Dew asked fans to name a new flavor and 4chan manipulated the poll results to make the winning flavor “Hitler Did Nothing Wrong.” You would think that would have put an end to social media promotions driven by user-generated content, but here comes Adidas, offering to generate images of new Arsenal jerseys for any account that tweets a certain hashtag. People started creating or repurposing Twitter accounts with fake names, and bad things ensued, Lauren Thomas reports:

The campaign was interrupted when accounts with Twitter handles like “@GasAllJewss,” “@MadelineMcCann” and “@96wasnotenough” started tweeting #DareToCreate. Shirts were being created and posted with messages like “Innocent Hitler” on the back. People with those accounts were referring to tragic incidents like the Hillsborough disaster, the worst in British sporting history, when 96 people were crushed inside a stadium in 1989, and the disappearance of British child Madeleine McCann in 2007.

Dr Disrespect’s short ban divides Twitch community

Guy “Dr Disrespect” Beahm was suspended from Twitch for two weeks for wandering into a men’s bathroom while live streaming, setting off a debate about how long you should be suspended from Twitch for wandering into a bathroom while live streaming. (Two weeks seems … fine to me?) Julia Alexander reports:

The length of suspensions on Twitch are already controversial. Streamers have complained that Twitch isn’t transparent with why certain personalities may receive a specific suspension length, while another person who does something similar may see a different punishment altogether. Alexandra Orlando, a streamer who’s seen Twitch change over the last four years, told The Verge that although the culture has matured, Beahm’s quickly resolved suspension is a major step backward. “It makes the rules of Twitch very unclear when some offenses are taken more seriously than others,” Orlando said.

Why AI can’t fix content moderation

Today on The Vergecast, Nilay Patel interviews UCLA professor Sarah T. Roberts about her new book Behind the Screen: Content Moderation in the Shadows of Social Media.

Launches

Instagram’s new Stories sticker lets you ask your followers to join a new group chat

Instagram has long coveted the high engagement Snapchat gets from young people trading endless messages there. And so now here’s a sticker that invites your friends to stop looking at stories and start messaging. From Ashley Carman:

Instagram’s newest Stories sticker lets people ask their followers to join a new group chat and then gives the poster the power to select who can join. The new feature, called the chat sticker, joins a bunch of other stickers Instagram has introduced to Stories, including polls, question boxes, mentions, locations, hashtags, and countdowns, among others. The chat sticker is relatively straightforward, and Instagram is positioning it as a solution for people who want to have a big group conversation about something or for making plans.

Takes

No, Russian Twitter trolls didn’t demonstrably push Trump’s poll numbers higher

Philip Bump debunks (debumps?) a recent study that found a correlation between Russian bot tweets and Trump’s poll numbers during the 2016 election.

It will be interesting to see whether other researchers are able to replicate the analysis undertaken here. We certainly can’t definitively say that no votes were changed as a result of Russian disinformation on Twitter or that no one’s political views were influenced by it. We can say, though, that this study is worth a great deal of skepticism — especially among those who are looking for evidence that Russia’s trolling handed the election to Trump.

America needs to see Amazon’s tax returns

Russell Brandom says we can’t have a good discussion about tax law until we know what Amazon actually pays:

Most of what we know about Amazon’s taxes comes from the company’s SEC filings, helpfully explained in this Wall Street Journal piece. Those filings list the company’s “current provision” for taxes — basically the money it expects to send to the US government this year. In the filing covering 2018, that number was negative $129 million, a net tax benefit for the company, levied against $11 billion in profits. That’s where candidates are getting the idea that Amazon is skimping on its taxes — but that number doesn’t tell you Amazon’s actual tax burden any more than your Apple receipt tells you the actual cost of your phone. The current provision could include newly settled tax disputes from previous years, or be lowered by deferments into the future. There are also a mess of thinly disclosed local and foreign tax payments, which are where Amazon gets the higher $2.6 billion number. But if you’re looking through that data for a sense of whether Amazon is paying its fair share, you’re in for a hard time. That’s a bizarre state of affairs, and it shouldn’t continue. Amazon’s tax filings are a matter of legitimate public interest, and major politicians are forced to guess about them. Amazon is a massive, world-shaping company, and it’s entirely fair to ask whether they’re paying their fair share. But whichever side of the tax debate you’re on, we can only make rational policy decisions if the public knows what companies are actually paying and why. The best way to do that is to get the information straight from the source.

And finally ...

Drew Gooden’s miniature play about trying to sell a lamp on Facebook Marketplace will resonate with anyone who has ever tried to sell anything online.

Talk to me

Send tips, comments, questions, and your best one-liners about the social media summit: casey@theverge.com.