In the face of growing political pressure and measles outbreaks in the US and abroad, YouTube recently pulled advertising from videos spreading anti-vaccine propaganda. Facebook, meanwhile, has announced that groups and pages that push misinformation about vaccines will get lower rankings and won’t be recommended to users. These overdue moves illustrate the companies’ ability to identify and police false content, and they undercut a notion widely embraced in the social media industry that Facebook, Twitter, and YouTube shouldn’t be “arbiters of the truth.”

Paul M. Barrett is deputy director of the NYU Stern Center for Business and Human Rights and author of the center’s newest report, “Tackling Domestic Disinformation: What Social Media Companies Need to Do.”

In fact, the major social media companies already play the arbiter role, just not in a systematic way. A new report from the New York University Stern Center for Business and Human Rights urges these companies to take a more active stance in preventing disinformation from spreading online.

We’ve known for some time that Russian operatives use social media platforms to interfere in US elections and exacerbate political polarization. But the problem actually starts here at home. The NYU report focuses on domestically generated disinformation, noting that far more false and divisive online content is produced within the US than comes from abroad. We bring this problem on ourselves, and the platforms need to do more to address it.

Domestic disinformation comes from message boards, websites, and networks of social media accounts. It flows from both conservatives and liberals, but it is predominantly a right-wing phenomenon, according to research by Harvard’s Berkman Klein Center for Internet & Society and the Oxford Internet Institute.

For a graphic illustration, type “HRC video” into YouTube’s search box, and you’ll get an array of creepy videos pushing a false story about Hillary Rodham Clinton and her longtime aide Huma Abedin engaging in violent child abuse. This, of course, never happened, but it’s one of the persistent conspiracy theories contaminating major social media sites, heightening political divisiveness and coarsening American public life.

Democrats have gotten into the disinformation game, too. Left-leaning operatives created a series of fake Facebook pages to confuse Republican voters during the 2017 US Senate special election in Alabama. One goal was to siphon conservative votes to a write-in candidate as part of an effort to defeat Republican Roy Moore. Democratic consultants also deployed thousands of automated Twitter accounts to make it seem as if Russian bots were supporting Moore. Similar Democratic tactics continued elsewhere in the country during the 2018 midterms.

Several objections to removing domestic disinformation have been raised publicly. Unlike fraudulent accounts secretly run by Russians, the misleading output of US citizens “starts to look a lot like normal politics,” Alex Stamos, Facebook’s chief security officer from 2015 through 2018, told me in connection with the NYU report. “I don’t think we want to encourage the companies to make judgments about what’s true and what’s not in politics.”

But the social media companies already make similar judgments. Facebook acknowledges that, when confronted by what it calls “false news,” it pushes the content down in users’ News Feeds, reducing its visibility by as much as 80 percent. YouTube announced in January that it would begin reducing recommendations of “content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”

If they can demote provably false content, why not take the next logical step and remove it altogether? The social media companies already delete whole categories of objectionable material, including hate speech and harassment. They should add the category of provably untrue content to the removal list. The First Amendment forbids government censorship; it doesn’t preclude the companies from moderating content on their privately owned and operated sites.

Facebook, Twitter, and YouTube may protest that, given the enormous volume of posts, tweets, and videos on their sites, they can’t feasibly police falsehood. But no one would expect them to eliminate all harmful content at once. They should begin by looking for provably false material that bears on the political system. The prominence of domestically generated disinformation, and its capacity to influence public policy and even swing elections, demand that we do more to safeguard against these threats. The health of our democracy depends on it.

