Imagine, for a moment, the following series of online exchanges. This isn't a real conversation. But it's the sort of chaotic, revealing, and messy back and forth that could spread across the internet on any given day in 2019:

A nonprofit immigrant rights group creates and publishes a Facebook invite for an upcoming event: a rally calling on city cops to stop carrying out sex stings at immigrant-owned massage businesses. An LGBTQ activist shares the invite link on Twitter, adding a note about how transgender and undocumented immigrant sex workers both face especially high rates of abuse.

The activist is retweeted by a number of people. Some of them add additional comments, many of them supportive. Someone is spamming their mentions with rude memes, but the activist doesn't see it because that jerk has already been muted. When a friend points out the new replies, this is upgraded to a block.

A parenting and religion blogger using the handle @ChristianMama96 shares a link to a blog post titled "Biblical Views of Sex and Gender," which she wrote and published on her WordPress blog.

In the comments of Christian Mama's blog post—hosted with a Bluehost plan and a URL purchased via GoDaddy—someone who has been banned from Twitter for violating its misgendering policy is holding court about Caitlyn Jenner. A new commenter calls the first an asshole—a comment Christian Mama deletes because it violates her no-profanity policy.

Christian Mama's blog gets some new readers from Twitter, where the guy whose comment she deleted has been tweeting at her as part of an extended riff on the religious right. (She stays quiet and lets her fans push back, but keeps screenshots for next week's newsletter.)

Meanwhile, the sex worker rights rally that was the focus of the initial Facebook invite is well attended and gets picked up by several media outlets, some of which embed Instagram posts from the event and video that's been uploaded to YouTube. The articles are indexed by search engines and get shared on social media.

In the end, none of the people, tools, or tech companies mentioned gets forced off the internet or hit with lawsuits or criminal charges.

Tomorrow, the event invite in such a scenario might be for a Black Lives Matter rally, a Libertarian Party fundraiser, a Mormon church group outing, an anti-war protest, or a pro-life march. The blogger and newsletter creator might be a podcaster, an Instagram model, a Facebook group moderator, a popular YouTuber, or an indie press website. The Twitter tribes might be arguing about the latest rape allegations against someone powerful, the drug war, Trump's tweets, immigration, internet regulation, or whether a hot dog counts as a sandwich.

These sorts of fictitious exchanges are, probably, no one's ideal of online speech. Like real-world conversations, they will frequently be unpredictable, uncontrollable, and frustrating. Which is to say, they are something like the way people actually communicate—online and off—in a world without top-down government control of our every utterance and interaction. People talk, argue, and disagree, sometimes in florid or outrageous terms, sometimes employing flat-out insults. It can be messy and irritating, outrageous and offensive, but also illuminating and informative—the way free speech always is.

At the end of the day, a diverse array of values and causes has space to coexist, and no faction gets to dictate the terms by which the entire internet has to play. It isn't always perfectly comfortable, but online, there is room for everyone.

In the digital realm, that freedom is only possible because of a decades-old provision of the Communications Decency Act known as Section 230. Signed into law by President Bill Clinton in 1996, at a time when both Democrats and Republicans were mostly worried about online indecency, it has enabled the internet to flourish as a cultural and economic force.

Widely misunderstood and misinterpreted, often by those with political ambitions and agendas, Section 230 is, at its core, about making the internet safe for both innovation and individual free speech. It is the internet's First Amendment—possibly better. And it is increasingly threatened by the illiberal right and the regressive left, both of which now argue that Section 230 gives tech industry giants unfair legal protection while enabling political bias and offensive speech.

Ending or amending Section 230 wouldn't make life difficult just for Google, Facebook, Twitter, and the rest of today's biggest online platforms. Eroding the law would seriously jeopardize free speech for everyone, particularly marginalized groups whose ideas don't sit easily with the mainstream. It would almost certainly kill upstarts trying to compete with entrenched tech giants. And it would set dangerous precedents, with ripple effects that extend to economic and cultural areas in the U.S. and around the world.

As Sen. Ron Wyden (D–Ore.), one of the provision's initial sponsors, put it during a March 2018 debate on the Senate floor: "In the absence of Section 230, the internet as we know it would shrivel."

Section 230 is one of the few bulwarks of liberal free speech and radically open discourse in a political era increasingly hostile to both.

The How and Why of Section 230

The Communications Decency Act (CDA) came into being as part of the Telecommunications Act of 1996, and was intended to address "obscene, lewd, lascivious, filthy, or indecent" materials. At the time, mainstream discourse around telecom regulation was primarily concerned with rules regarding telephone companies and TV broadcasts. To the extent lawmakers considered the CDA's effect on the internet, the focus was on curbing the then-new phenomenon of online pornography.

Under the CDA, telecommunications facilities could face criminal charges if they failed to take "good faith, reasonable, effective, and appropriate actions" to stop minors from seeing indecent content. The law faced a First Amendment challenge, and the U.S. Supreme Court in 1997 struck down many of the decency provisions entirely. But an amendment introduced by Rep. Chris Cox (R–Calif.) and Wyden, then in the House of Representatives, survived. That amendment is what we now know as Section 230.

After touting the unprecedented benefit of digital communication mediums, the statute notes in a preamble that "the Internet and other interactive computer services have flourished…with a minimum of government regulation." The legislation states that it's congressional policy "to preserve the vibrant and competitive free market" online.

The point of Section 230 was to protect the openness of online culture while also protecting kids from online smut, and protecting the web at large from being overrun by defamatory, hateful, violent, or otherwise unwanted content. Section 230 would do this by setting up a legal framework to encourage "the development of technologies which maximize user control over what information is received" and removing "disincentives for the development and utilization of blocking and filtering technologies that empower parents." Rather than tailoring the entire internet to be suitable for children or majority sensibilities, Congress would empower companies, families, and individuals to curate their own online experiences.

Section 230 stipulates, in essence, that digital services or platforms and their users are not one and the same and thus shouldn't automatically be held legally liable for each other's speech and conduct.

Which means that practically the entire suite of products we think of as the internet—search engines, social media, online publications with comments sections, wikis, private message boards, matchmaking apps, job search sites, consumer review tools, digital marketplaces, Airbnb, cloud storage companies, podcast distributors, app stores, GIF clearinghouses, crowdsourced funding platforms, chat tools, email newsletters, online classifieds, video sharing venues, and the vast majority of what makes up our day-to-day digital experience—has benefited from the protections offered by Section 230.

Without it, they would face extraordinary legal liability. A world without Section 230 could sink all but the biggest companies, or force them to severely curtail the speech of their users in order to avoid legal trouble.

There are two parts of Section 230 that give it power. The first specifies:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The second part says voluntary and "good faith" attempts to filter out or moderate some types of user content don't leave a company on the hook for all user content. Specifically, it protects a right to "restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."

Without that second provision, internet companies wishing to avoid trouble would be better off making no attempts to police user content, since doing so would open them up to much greater legal liability. "Indecent" and "offensive" material—the stuff social giants at least try to filter or moderate—could proliferate even more than it already does.

Beyond the First Amendment

Santa Clara University law professor Eric Goldman argues that Section 230 is "better than the First Amendment," at least where modern communication and technology are concerned.

"In theory, the First Amendment—the global bellwether protection for free speech—should partially or substantially backfill any reductions in Section 230's coverage," Goldman wrote on his blog recently. "In practice, the First Amendment does no such thing."

To be legally shielded on First Amendment grounds, offline distributors like bookstores and newsstands must be almost entirely ignorant about materials found to be illegal. A store is legally protected so long as its owners don't know about specific offensive material in a publication, even if they know they are stocking the publication. But legal blame can shift if a court determines that owners should have known something was wrong.

And all it can take to reach that "should have known" threshold is an alert that something might be off. Once a distributor is alerted, by anyone, that a work is problematic or that those involved with it have a problematic history, the distributor may be legally liable for the content—possibly as liable as the work's creator and the parties directly responsible for its very existence.

In an analog world—with limited content suppliers and limited means of distribution—this expectation may strike a workable balance between free speech and crime prevention. Because a bookstore cannot hold infinite books, we expect bookstore owners to know what they have in stock. But that expectation doesn't scale to the digital world, where users continuously upload content and companies receive notice of thousands (or more) of potentially problematic posts per day.

"Congress passed Section 230 because the First Amendment did not adequately protect large online platforms that processed vast amounts of third-party content," writes Jeff Kosseff in his 2019 book on Section 230, The 26 Words That Created the Internet.

As far back as 1997, courts understood that Section 230 was essential to online innovation and expression. In a world without it, computer service providers "would be faced with ceaseless choices of suppressing controversial speech [or] sustaining prohibitive liability," wrote 4th Circuit Court of Appeals Judge J. Harvie Wilkinson in his decision on Zeran v. America Online, a case involving defamatory messages posted to an AOL message board by a third party.

Some third-party content moderation calls are easy, such as those involving outright threats of violence or child pornography. But some—like allegations of defamation—require facts beyond what is immediately obvious. Many require judgment calls about context that isn't readily apparent across cultures or cliques.

Without Section 230, any complaint could thus be sufficient to make a company liable for user-created content. Companies would have every incentive to simply take down content or ban any users whom others flagged. Platforms are already overly deferential to companies and parties that file copyright takedown requests, since Section 230 does not protect against intellectual property law violations. Repealing Section 230 could cause them to show the same deference to people who complain about political or cultural content they don't like. The result would be a dramatically less permissive environment for online speech.

"Section 230 extends the scope of protection for 'intermediaries' more broadly than First Amendment case law alone," says tech lawyer and Reason contributing editor Mike Godwin in a new book of essays, The Splinters of our Discontent. And more protection for intermediaries means more free speech for all of us.

The Threat From the Right

The free-speech merits of the law haven't stopped lawmakers on both sides of the aisle from attacking 230, using varying justifications. Some claim it allows companies to be too careless in what they allow. Others claim it encourages "politically correct" censorship by tech-world titans.

"If they're not going to be neutral and fair, if they're going to be biased, we should repeal" Section 230, argued Sen. Ted Cruz (R–Texas) last fall. Cruz has helped popularize the idea that only through increased government control over online speech can speech really be free.

No U.S. politician has been more aggressive in pushing this idea than Sen. Josh Hawley, the 39-year-old Republican from Missouri who ousted Democratic Sen. Claire McCaskill in the 2018 midterms. McCaskill was also prone to fits over Section 230, but mostly as part of a tough-on-crime charade against "online sex trafficking." Hawley has almost single-handedly turned tech industry "arrogance" generally—and Section 230 specifically—into a leading front in the culture war.

Hawley has repeatedly suggested big social media platforms should lose Section 230 protection, claiming (incorrectly) that there's a legal distinction between online publishers and online platforms. According to Hawley—but not the plain text of Section 230 or the many court decisions considering it—platforms lose Section 230 protection if they don't practice political neutrality.

"Twitter is exempt from liability as a 'publisher' because it is allegedly 'a forum for a true diversity of political discourse,'" tweeted Hawley last November, in a call for Congress to investigate the company. "That does not appear to be accurate."

It's actually Hawley's interpretation of the law that is inaccurate. Section 230 protections are simply not conditioned on a company offering "true diversity of political discourse" (whatever that means), or anything like it.

The Communications Decency Act does say that "the Internet and other interactive computer services" can be venues for a "true diversity of political discourse," "cultural development," and "intellectual activity"—but this comes in a preamble to the actual lawmaking part of Section 230. It merely sums up congressional dreams for the web at large.

Nonetheless, Hawley's incorrect characterization of Section 230 has been echoed by numerous Republicans, including President Donald Trump and those on the fringe right. They say digital companies are biased against conservatives and Washington must take action.

In June, Hawley did just that, introducing a bill that would require tech companies to act like he's already been insisting they have to. Under his proposal, dubbed the "Ending Support for Internet Censorship Act," web services of a certain size would have to apply to the Federal Trade Commission (FTC) every two years for Section 230 protection, which would only be granted to companies that could prove perfectly neutral moderation practices.

The bill, in other words, would put the federal government in charge of determining what constitutes political neutrality—and correct modes of expression, generally; it would then grant government the power to punish companies that fail at this subjective ideal. Hawley's bill would replace the imperfect-but-market-driven content moderation practices at private companies with state-backed speech police.

In talking about tech generally, Hawley makes no concession to the conscience rights of entrepreneurs, freedom of association, or personal responsibility—values Republicans have historically harped on when it comes to private enterprise. Rather, Hawley insinuates that permissible technology and business practices should be contingent on their social benefit.

In a May speech titled "The Big Tech Threat," Hawley criticized not only social media content moderation but the entire business model, which he condemned as "hijacking users' neural circuitry to prevent rational decision making." He implied individuals have little free will when confronted with algorithmic wizardry, blaming Facebook for making "our attention spans dull" and killing social and familial bonds.

For Hawley, overhauling Section 230 is part of a larger war on "Big Tech," in which Silicon Valley has been cast in the role once occupied by Hollywood, violent games and music, or "liberal media elites." It's a battle over control of culture, and Hawley wants to use the power of the federal government to advance the conservative cause. (Hawley's office did not respond to multiple requests for comment.)

Hawley isn't the only one. Arizona Republican Rep. Paul Gosar—also parroting Hawley's falsehoods about the publisher/platform distinction—recently introduced his own version of an anti-Section 230 bill. In January, Rep. Louie Gohmert (R–Texas) proposed conditioning Section 230 protection on sites displaying user content in chronological order.

From Fox News host Tucker Carlson, who has railed that "screens are poison," to Trump boosters like American Majority CEO Ned Ryun, Hawley's anti-tech, anti-230 sentiment is gaining traction across the right. Like the social conservatives who once demonized comic books and rap music, contemporary conservatives are siding with censors—but not for the protection of any particular values. They champion regulation as leverage against companies that have made high-profile decisions to suspend or "demonetize" right-leaning content creators.

It's an impulse born of bitterness, nurtured by carefully stoked culture war outrage, and right in line with the trendy illiberalism of the MAGA-era right. The movement has plenty to say about what it wouldn't allow so long as Republicans are wearing the censor hat, but these conservatives are strangely silent about how the same control would be used by a Democratic administration.

The Threat From the Left

What Democrats would do with the power Republicans are seeking matters too, because Democrats have their own plans for Section 230 and just as many excuses for why it needs to be gutted: Russian influence, "hate speech," sex trafficking, online pharmacies, gun violence, "deepfake" videos, and so on.

After Facebook declined to take down a deceptively edited video of Nancy Pelosi (D–Calif.), the House Speaker called Section 230 "a gift" to tech companies that "could be a question mark and in jeopardy" since they were not, in her view, "treating it with the respect that they should."

Senator and presidential candidate Kamala Harris (D–Calif.) has been pushing for the demise of Section 230 since she was California's attorney general. In 2013, she signed on to a group letter asking Congress to revise or repeal the law so that state prosecutors could go after Backpage for its adult ad section.

Like Hawley, Harris and bipartisan crusaders against sex-work ads offered their own misinformation about Section 230, implying again and again that it allows websites to offer blatantly illegal goods and services—including child sex trafficking—with impunity.

Section 230 "permits Internet and tech companies like Backpage.com to profit from the sale of children online" and "gives them immunity in doing so," wrote law professor Mary Leary in The Washington Post two years ago.

"Backpage and its executives purposefully and unlawfully designed Backpage to be the world's top online brothel," Kamala Harris claimed in 2016. Xavier Becerra, California's current attorney general, said in 2017 that the law lets criminals "prey on vulnerable children and profit from sex trafficking without fully facing the consequences of their crimes."

But attorneys like Harris and Becerra should know that nothing in Section 230 protects web platforms from prosecution for federal crimes. When passing Section 230, Congress explicitly exempted federal criminal law from its purview (along with all intellectual property statutes and certain communications privacy laws). If Backpage executives were really guilty of child sex trafficking, the Department of Justice (DOJ) could have brought trafficking charges at any time.

Nor does Section 230 protect web operators who create illegal content or participate directly in illegal activities. Those operators can be prosecuted by the feds and sued in civil court, and they remain subject to state and local criminal laws.

Websites that are both deep-pocketed and directly criminal are pretty rare, however. And state attorneys general don't get settlement and asset forfeiture money—nor their names in the headlines—when DOJ does the prosecuting. Hence, AGs have pushed hard against Section 230, asking Congress for its destruction once again in 2017.

In 2018, Congress did honor that request (at least in part) by passing a bipartisan and widely lauded bill—known by the shorthand FOSTA—that carved out a special exception to Section 230 where violations of sex trafficking laws are concerned. The Allow States and Victims to Fight Online Sex Trafficking Act also made it a federal crime to "operate an interactive computer service with the intent to promote or facilitate the prostitution of another person."

After the law passed, Craigslist shuttered its personals section almost immediately, explaining that the new law opened "websites to criminal and civil liability when third parties (users) misuse online personals unlawfully." The risk wasn't worth it. Since then, there's been ample evidence of a FOSTA chilling effect on non-criminal speech. But the law has failed to curb forced and underage prostitution, and police, social service agents, and sex workers say that FOSTA has actually made things worse.

Proponents portrayed FOSTA as a one-time edit of Section 230 that was necessary due to the special nature of underage prostitution and the existence of companies that allow online ads for legal sex work. But this year, state attorneys general asked Congress to amend Section 230 for the alleged sake of stopping "opioid sales, ID theft, deep fakes, election meddling, and foreign intrusion."

The range of reasons anti-230 crusaders now give belies the idea that this was ever really about stopping sex trafficking. And they will keep taking bites out of the apple until there's nothing left.

Harris has since suggested that we must "hold social media platforms accountable for the hate infiltrating their platforms." Sen. Mark Warner (D–Va.) said companies could see changes to Section 230 if they don't address political disinformation. Senator and 2020 presidential candidate Amy Klobuchar (D–Minn.) said it "would be a great idea" to punish social media platforms for failing to detect and remove bots.

What both the right and left attacks on the provision share is a willingness to use whatever excuses resonate—saving children, stopping bias, preventing terrorism, fighting misogyny and religious intolerance—to ensure more centralized control of online speech. They may couch these aims in partisan terms that play well with their respective bases, but their goal is essentially the same. And it's one in which only Washington, state prosecutors, and fire-chasing lawyers win.

"There's a lot of nonsense out there about what [Section 230] is all about," Wyden said in a May interview with Recode. In reality, "it's just about the most libertarian, free speech law on the books."

Is it any wonder most politicians hate it?

How 230 Makes Online Argument Possible

Let's go back to the hypothetical online arguments from the beginning. All of the characters involved benefit from Section 230 in crucial ways.

Because of Section 230, digital platforms can permit some sex work content without fearing prosecution. Without it, any information by and about sex workers, including content essential to their health and safety, could trigger takedown alarms from skittish companies worried about running afoul of local vice laws. The same goes for content concerning noncommercial sexual and romantic relationships, as well as stigmatized legal businesses like massage.

Because of Section 230, bloggers and other independent content creators can moderate their own comment sections as they see fit, even if that means deleting content that's allowable elsewhere or vice versa. Big companies like Twitter can set their own moderation rules, too, as well as provide users with customizable filtering tools. Meanwhile, blogging platforms, domain name providers, and web hosting companies needn't worry that all of this puts them at risk.

Without Section 230, all sorts of behind-the-scenes web publishing and speech dissemination tools would be in legal jeopardy. Twitter could lose the many recent lawsuits from former users challenging their respective suspensions in court. And online communities large and small could lose the right to discriminate against disruptive, prurient, or "otherwise objectionable" content.

Without Section 230, companies would thus be more likely to simply delete all user-flagged content, whether the report has merit or not, or at least immediately hide reported content as a review proceeds. It's easy to imagine massive backlogs of challenged content, much of it flagged strategically by bad actors for reasons having nothing to do with either safety or veracity. Silencing one's opponents would be easy.

Finally, because of Section 230, search engines needn't vet the content of every news article they provide links to (the same goes for news aggregation apps and sites like Reddit and Wikipedia). And embedding or sharing someone else's social content doesn't make you legally liable for it.

Without Section 230, finding and sharing information online would be much harder. And a retweet might legally be considered an endorsement, with the shared liability that entails.

Yet for all the protections it provides to readers, writers, academics, shitposters, entrepreneurs, activists, and amateur political pundits of every persuasion, Section 230 has somehow become a political pariah. While antitrust actions, data protection laws, and other regulatory schemes have gained some momentum, abrogating Section 230 is now a central front in the digital culture war.

With both major parties having been thrown into identity crises in the Trump era, it's not surprising they would somehow converge on a pseudo-populist crusade against Big Tech. The political class now wants everyone to believe that the way the U.S. has policed the internet for the past quarter-century—the way that's let so many of the world's biggest internet companies be founded and flourish here, midwifed the birth of the participatory media we know today, helped sustain and make visible protest movements from Tunisia to Ferguson, and shined a light on legalized brutality around the world—has actually been lax, immoral, and dangerous. They want us to believe that America's political problems are Facebook's fault rather than their own.

Don't believe them. The future of free speech—and a lot more—may depend on preserving Section 230.

CORRECTION: This post previously misidentified the state Cox represented in Congress.