Say you're a white supremacist who hates Jewish people—or black people, Muslim people, Latino people, take your pick. Today, you can communicate those views online any number of ways without setting off many tech companies' anti-hate-speech alarm bells. And that's a problem.

As the tech industry walks the narrow path between free speech and hate speech, it allows people with extremist ideologies to promote their brands and beliefs on its platforms, as long as the violent rhetoric is swapped out for dog whistles and obfuscating language. All the while, social media platforms let these groups amass and recruit followers under the guise of peaceful protest. The deadly violence in Charlottesville, Virginia, last weekend revealed those gatherings to be anything but peaceful. Now it's up to those same tech companies to adjust their approach to online hate—as GoDaddy and Discord did on Monday, by shutting down hate groups on their services—or risk enabling more offline violence in the future.

A Platform for Hate

For the most part, as long as you’re not using an online service to directly threaten anyone, or to disparage groups of people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease—the categories protected under policies laid out by Facebook, Twitter, and YouTube—you can get away with practically anything. You can wrap your hate in lofty language about “the heritage, identity, and future of people of European descent,” as white nationalist Richard Spencer does through his supposed think tank, the National Policy Institute. On Twitter, meanwhile, sharing a gas-chamber meme garners just a one-week suspension.

“Social media has allowed [hate groups] to spread and share their messages in ways that [were] never before possible,” says Jonathan Greenblatt, CEO of the Anti-Defamation League, which has tracked anti-Semitism and hate for more than a century. “They’ve moved from the margins into the mainstream.”

Last weekend’s white-supremacist march in Charlottesville, which left 32-year-old Heather Heyer dead after an apparent Nazi sympathizer rammed his vehicle into a crowd, injuring 19 others, was organized out in the open on the very platforms that claim to ban hate speech of any kind. The “Unite the Right” rally had its own Facebook page. On Reddit, members of the subreddit r/The_Donald promoted the event in the days leading up to it. And bigots like former Ku Klux Klan leader David Duke used Twitter to warn ominously that the torch rally was “only the beginning.”

Under the banner of free speech, these tech companies allowed the rhetoric to not only live on their platforms but thrive there. That’s because they operate using a simultaneously fuzzy and overly narrow set of rules around what constitutes banned behavior.

Twitter overtly allows “controversial content,” including from white-supremacist accounts. It takes action only when those tweets threaten violence, incite fear in a group of people, or use explicit slurs.

Facebook, meanwhile, says that while it removes hate speech or any praise of violent acts and hate groups, it allows “people to use Facebook to challenge ideas, institutions, and practices. And we allow groups to organize peaceful protests or rallies for or against things.”

That distinction ignores social media's well-known role as a tool of mass radicalization. Without explicitly espousing violence, these white-supremacist extremists can still recruit potential followers to a set of beliefs with deeply violent roots in Nazi Germany and the Jim Crow South. It should come as no surprise that a protest anchored in hate would erupt in violence. For tech companies to defend those online discussions as peaceful protests is disingenuous at best.