Whenever the push is on for private platforms to censor their environment, someone will raise the issue of free speech, and someone else will respond that the First Amendment only applies to government. Of course, that’s correct, and every private company, be it Facebook, YouTube or Twitter, can choose to censor its platform as it sees fit.

But as Jesse Walker argues, as legally accurate as that may be, it’s not the full story.

When YouTube, Facebook or Twitter cracks down on some form of expression — conspiracy theories, radical rants, terrorist propaganda — some of the targets inevitably complain that their freedom of speech is under attack. (This feeling of victimhood may be what sent Nasim Aghdam to YouTube headquarters, gun in hand.) There is a strong retort to this: These are private platforms with a right to decide what they publish. It is no more a violation of the 1st Amendment for YouTube to muzzle a channel it finds offensive than it is for this newspaper to refuse to run a column calling for Minnesota to invade Wisconsin. But what if a private platform suppresses speech because it’s afraid the government might otherwise step in?

Sure, another “what if” argument, often dismissed here as the “space alien” possibility. Except this space alien has already landed and announced that dinner will be ready soon.

It’s happened before. The Supreme Court ruled in 1915 that free-speech protections did not apply to the movies, a decision rightly reversed in 1952. In the interim, the industry opted to stave off federal regulation by establishing a series of self-censorship systems. The most powerful of these was the Production Code, which was created in 1930 but didn’t really grow teeth until 1934, when Congress was mulling several bipartisan bills to tone down motion picture content. Hollywood got the message. Under the code: Seduction was “never the proper subject for a comedy,” plots couldn’t involve “sex relationships between the white and black races,” and the drug trade “should not be brought to the attention of audiences,” among other tight constraints.

Whether or not the government can lawfully censor or dictate speech and expression isn’t so much the point as whether it can effectively threaten to do so, giving rise to the chilling effect of self-censorship.

Now it’s social media’s turn. During last year’s hearings on Russian-sponsored online speech, Sen. Dianne Feinstein (D-Calif.) was overt about it. “You created these platforms, and now they’re being misused,” she told representatives from Facebook, Google and Twitter. “And you have to be the ones who do something about it — or we will.”

The idea of the government swooping in with rules, limitations and potential prosecutions isn’t good for business. Even if those efforts are eventually thwarted by courts protecting the platforms’ rights, the companies will be tied up in litigation for years, at great expense, and subjected to misinformation campaigns making them look like the bad guys, perpetrating crimes upon the intellectually challenged.

Businesses certainly don’t want to be told what they can and cannot do, but they also don’t want to spend the next decade fighting the government over it. Nor do they want to risk losing the war over who’s on the side of the angels, thus alienating their users. Without users, none of these platforms would matter a whit.

Is Feinstein threatening space alien stew?

Consider the Fight Online Sex Trafficking Act, which has now passed both houses of Congress. By making internet platforms legally liable for the things users post on them, the law encourages sites to crack down indiscriminately on all sorts of sexual discussions — including, ironically, online spaces where sex workers share information that helps them protect themselves against abuse. Risk-averse companies will have every incentive to police their users’ activities with a heavy hand, deploying algorithms that casually sweep up any posts that contain the wrong keywords. Or they’ll just eliminate potentially dicey forums altogether, as Craigslist and Reddit did as soon as the bill passed.

As bad and pointless a law as FOSTA/SESTA may be for its stated purposes, it served one exceptionally valuable purpose for the government: it proved that the government can successfully confuse the public with scary but vapid words like “sex trafficking” and enact a law that wreaks havoc on the internet.

It can be done, Zuck. What we did to Backpage, we can do to you. Nice platform you have there. It would be a shame if anything happened to it.

And every government wants its own piece of control. California believes it should be in charge of the internet, while Germany says “hold my beer.”

Stronger pressure is coming in countries whose legal protections aren’t as robust as ours. In Germany, a law requires companies to remove hate speech from their platforms within 24 hours or face potentially crippling fines. Sites have subsequently squashed anything that could conceivably prompt such penalties, even if on closer examination the target turns out to be, say, a satirist who’s attacking rather than espousing bigotry.

Jesse neglects to note that much of what constitutes hate speech in Germany is speech critical of its government and officials. They have feelings too, you know. So the terminally insipid may be inclined to applaud censorship like Germany’s because the putative target is “hate speech,” and every SJW knows exactly what hate speech is and isn’t, even if they can’t quite put it into comprehensible words. They will be sucked into supporting it, much as their fellow emotionalists found SESTA acceptable because it used the terrifying “sex trafficking” sales pitch to confuse the useful idiots and obscure its greater significance.

If Zuck or @Jack wants to take their platform and shush the meanies, whoever they may be, because that’s what they choose to do, there is nothing illegal about it. But when censorship isn’t what they choose to do, but what they do to fend off Feinstein and her ilk, they aren’t acting out of choice, but under governmental threat and coercion.

Can the government get away with this, pressuring platforms to self-censor upon threat of intervention? It already has, with the public’s blessing.