I’m going to write down three names and ask you to remember the last time you saw any of them mentioned in a conversation, whether on social media or in real life. Are you ready? Here’s the first one: Milo Yiannopoulos. It’s been years, if you’re lucky. Jacob Wohl? Weeks, at least. Finally, and this one you’ll definitely remember: Alex Jones. While we do have a winner, that’s only because Jones was in the news this past week after he appeared in a deposition video claiming that he was suffering from a form of “psychosis” when he told listeners that the Sandy Hook massacre was a hoax. But had that pathetic news not broken, we might not have heard a squeak about him, either. That’s because all three of these bloviating attention-seekers, who harnessed the raw power of the Internet to spread hate, fake news, and conspiracy theories, have been banned from Twitter. The result has been astounding—and, dare I say, rather nice.

In recent weeks, tech companies in Silicon Valley have seemed like they’ve finally started to grow a conscience—albeit a small one. Twitter, for one, said it is exploring labeling offensive tweets—including those published by the president. Then there was Pinterest, which took the brave, if insanely obvious, step of blocking search results related to vaccination on its platform, snuffing out the entire anti-vaxxer community, and, in turn, forcing Facebook (which has always complained about how hard it is to stop such volatile conversations) to do something similar. Now Facebook is finally doing something about Nazis and white nationalists, by pointing them to nonprofits that help people leave hate groups.

And yet, the leaders of these social platforms need to do more. A lot more. The Internet was designed to be an open space for free expression, where power might, for once in human history, be controlled by people. Go watch any of the early interviews and talks by Jack Dorsey and you will see him genuinely professing that Twitter was going to connect people and their elected officials in engaging ways. Along the way, however, that power was co-opted by some of the worst people in this world—not just extremists and trolls and hackers who wish us evil, but also the C.E.O.s of social platforms like Facebook and Twitter and YouTube, who don’t seem to think it’s their responsibility to police what people say on their platforms.

Executives liken their products to megaphones sold in stores: you wouldn’t ask a factory that produces megaphones to tell people what they can or can’t yell into them after they’ve been purchased. But frankly, this is a pathetic cop-out. As much as Silicon Valley luminaries profess to be libertarians, they act more like anarchists. Or perhaps just like capitalists. Mark Zuckerberg didn’t change his business model when Russians were using his platform to disrupt the 2016 presidential election, or when the United Nations accused Facebook of playing a “determining role” in ethnic cleansing in Myanmar, because his business model is absurdly profitable.

Over the years, numerous executives at Twitter have told me they don’t think their platform is to blame for all the hatred spewed online. Rather, they see Twitter as “a mirror to society.” But this, too, is pathetic. Twitter is a mirror to society in the same way a funhouse mirror distorts your image at a carnival. The problem with Twitter, and all of these platforms, is that they lack the key ingredient that keeps society from tearing itself apart: empathy. You can’t see how much someone is hurt when you only see one side of a conversation, or when you’re the one doing the hurting. Twitter and other social networks inherently do not have empathy built into their platforms. Very few technologies actually do. Add anonymity into the mix, and algorithms that amplify the most outrage-inducing content, and you’ve got a recipe for total societal disintegration.

In reality, incremental changes aren’t going to stop the hate speech and atrocities being broadcast at scale on Facebook, YouTube, and Twitter. After the Christchurch mosque massacre in New Zealand, which was live-streamed on Facebook, there was an outcry that these platforms need to fix their problems. “It is unacceptable to treat the internet as an ungoverned space,” Australian Prime Minister Scott Morrison wrote in a letter after calling for a global crackdown on social-media platforms for being unable (and often unwilling) to police themselves. New Zealand’s prime minister, Jacinda Ardern, echoed the same concerns, saying, “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher, not just the postman. It cannot be a case of all profit, no responsibility.”

What would seem obvious, and even empathetic, would be for these platforms to take proactive steps to fix these problems. Instead, they tragically choose not to, sometimes justifying their inaction as a defense of free speech. But let’s be blunt: this isn’t about the First Amendment. I’m sorry, but if you can’t ban someone like Alex Jones or Milo Yiannopoulos from your platform because they don’t violate the terms of service, then maybe your terms of service are an utter joke. How difficult is it to add an addendum that says: “We don’t allow people on our platform who harass the victims of a mass shooting”?

Last summer, Facebook banned several pages involving Jones and Infowars, and in February updated its policy so that it could ban nearly two dozen more. But at least some Jones-related pages are still up. So is the personal page for Yiannopoulos, who responded to the Christchurch attack by branding Islam “barbaric” and “alien.” Australia immediately banned Yiannopoulos. Why didn’t Facebook?

In many instances, it appears these decisions are being made with only profit in mind. As an infuriating Bloomberg report noted this week, executives at YouTube have ignored warnings for years about the toxic videos being shared on the video platform, afraid that if they were to police them, then engagement would go down. Can you imagine making those kinds of decisions at night and then sleeping soundly? I sure can’t. If the people who run tech companies ran our society, America would look more like a 365-day version of The Purge.

The past couple of years have made it clear that the Internet is not the utopia scientists once thought they were building. There are narcissistic, evil, self-righteous sociopaths among us—apparently lots of them—who see a camera, a screen, or an empty text box, and are willing to do anything to get countless eyeballs looking their way, regardless of how many people might be hurt as a result. Hate is dispersed everywhere, all the time, from the ugly comments people leave on news articles they disagree with, to the thousand times a second that people bicker on Twitter. But more and more, it’s also breaking through into the real world, where sociopaths live-stream mass shootings or other barbaric acts.

For so long, these tech platforms have either played dumb, like they don’t have the resources to solve these problems, or argued it’s not their place to step in. But the eradication of the most vile purveyors of digital slime, like Alex Jones and Milo Yiannopoulos, who have all but vanished from the public zeitgeist like virulent diseases during the Middle Ages, illustrates that tech platforms have more power than they care to admit to help the Internet, and, in turn, society, become a better place.
