In May 2019, Facebook expelled seven highly controversial users from its platform. Most notable among the evictees were right-wing conspiracy theorist and fearmonger Alex Jones and left-wing homophobe and devout antisemite Louis Farrakhan. The excommunication of Jones and Farrakhan from their digital pulpits came amidst mounting pressure on Facebook executives to take a more active role in policing the content that appeared on their site. Facebook’s decision to impose permanent bans on its most controversial users was a departure from previous company policy, which had characterized its social media platform as, in the words of Mark Zuckerberg in March 2019, “the digital equivalent of a town square.”

In describing Facebook as a town square, Mr. Zuckerberg unwittingly (but perhaps presciently) touched on a line of legal precedent known as public forum doctrine. A school of First Amendment thought, public forum doctrine could — if invoked in court by members of Facebook’s ignominious outcast class — potentially have drastic implications for Mr. Zuckerberg’s editorial authority. Applied in full, public forum doctrine could limit or even curtail entirely Facebook’s ability to censor the material that is posted by its users. So what is public forum doctrine?

Named after the open markets of ancient Rome, public forum doctrine is a jurisprudential tool used to determine whether speech delivered in public settings is protected under the First Amendment. The spirit of public forum doctrine was first given life by Justice Owen J. Roberts in 1939. In the case Hague v. Committee for Industrial Organization, Roberts wrote that a group of political activists could not be prohibited from using a public park for their gatherings since:

“Wherever the title of streets and parks may rest, they have immemorially been held in trust for the use of the public and, time out of mind, have been used for purposes of assembly, communicating thoughts between citizens, and discussing public questions. Such use of the streets and public places has, from ancient times, been a part of the privileges, immunities, rights, and liberties of citizens.”

Since its inception in 1939, public forum doctrine has been applied to protect speech delivered in public theaters (Southeastern Promotions, Ltd. v. Conrad, 1975) and in front of courthouses (U.S. v. Grace, 1983), and has even been invoked in airports (International Society for Krishna Consciousness v. Lee, 1992). Most critically for Facebook, however, public forum doctrine was famously used in 1980 to protect the speech of several high school students who were seeking signatures for a petition in a shopping mall. What makes the case distinct from other public forum cases — and what makes it relevant for Facebook — is that the mall in which the high schoolers were engaging in their activities was not public property, but privately owned.

In a unanimous decision, the Supreme Court ruled in Pruneyard Shopping Center v. Robins that the teenage petitioners were entitled to engage in their activities since the shopping center, though private, essentially functioned as a public forum. In his concurrence, Justice Thurgood Marshall wrote, “The shopping center owners had opened their centers to the public at large, effectively replacing the State with respect to such traditional First Amendment forums as streets, sidewalks, and parks.” In summation (the case is over 100 pages long), since the petitioners were not substantively impairing the business of the mall, nor were their views likely to be conflated with those of the owners, the court determined that their speech was protected since the mall operated, for all intents and purposes, as a traditional public forum. The court came to this conclusion despite the fact that the mall was not public at all, but a privately owned commercial entity. This precedent could have massive implications for Facebook.

With more than 2.4 billion users, it is not hyperbole to describe Facebook as the largest public forum in the history of the human race. And whereas a shopping center merely suffices as a possible public forum — even though its raison d’être has nothing to do with the exchange of ideas — Facebook’s express purpose, in its own words, is to serve as a “town square.” Furthermore, none of the concerns raised by Pruneyard Shopping Center apply to Facebook (not that the court heeded any of these concerns in the first place). Far from harming Facebook’s profits, speech is precisely their source — the more controversial, the better. Nor would anyone in their right mind conflate speech posted on Facebook with Facebook’s own opinions. Given that airports, public parks, theaters, and private shopping centers all fall under the umbrella of public forum doctrine, why shouldn’t Facebook? Is it not more amenable to broad public discussion than any of these other venues? I would argue that it is. And more importantly, the Court has already demonstrated a willingness to apply free speech doctrine to digital space.

In 2017, the Knight First Amendment Institute at Columbia University filed a lawsuit against President Donald Trump for blocking people on Twitter (yes, seriously). And, more seriously, the U.S. Court of Appeals for the Second Circuit ruled in July 2019 that President Trump’s actions were a violation of the First Amendment. The court stated that “The First Amendment does not permit a public official who utilizes a social media account for all manner of official purposes to exclude persons from an otherwise-open online dialogue because they express views with which the official disagrees.” In so ruling, the court recognized that a private account (@realDonaldTrump) on a private platform (Twitter) fell under the purview of the First Amendment. Though this case does not apply to Mark Zuckerberg’s decision to block controversial users, it demonstrates an openness on the part of the court to adapting constitutional doctrine to the digital realm. The court seemed to acknowledge the potential implications of its ruling when it added a crucial caveat: “We [do not here] consider or decide whether private social media companies are bound by the First Amendment when policing their platforms.” This is an immensely important proviso: the Court of Appeals is essentially acknowledging that there may be constitutional questions at play when a private internet company elects to “police” its private platform, but it is simply choosing not to address them in this particular case.

So, in the event that a future court does decide to rule on the constitutionality of blocking users from social media, what would be the consequences?

Were Facebook to be deemed a public forum in the wake of a lawsuit initiated by someone like Louis Farrakhan, it, like other public fora, would not be able to censor the speech of its users. While reasonable “time, place, and manner” limitations on speech in public fora have been accepted by the Court (you may hold a rally in front of a school, just not while classes are in session), such restrictions must be content-neutral. This would mean that any restriction Facebook placed on speech would have to apply to all users irrespective of the views they were espousing. Such an application of public forum doctrine would effectively restore Alex Jones and Louis Farrakhan to their former pedestals (much to the chagrin of those who prefer their newsfeeds devoid of homophobia and antisemitism).

The specter of such an application of public forum doctrine raises a complicated question: is this the type of digital world we want? One in which bigots like Alex Jones and Louis Farrakhan have free rein over our news feeds? One in which their ignorance and intolerance are on full broadcast for the world to see? To this question, I respond emphatically in the affirmative.

The perception that banning condemnable speech from social media will lead to the elimination of condemnable ideas in society is a pernicious illusion. In 1927, Supreme Court Justice Louis Brandeis put it best when he wrote that when society is confronted with “falsehoods and fallacies … the remedy to be applied is more speech, not enforced silence.” While it may feel good to remove unabashed bigots from social media networks, we are most certainly doing ourselves a grave disservice in the process. The remedy for bad speech, as Brandeis acknowledged, is more speech, not less, as tiresome and futile as this may sometimes seem. Suppressing bad speech on social media doesn’t eliminate it; it merely relegates it to the darker corners of the internet that most of us don’t have access to or actively choose to avoid. And it is in these backwaters — think 4chan — that bad speech ferments.

In their book Extreme Speech and Democracy, authors Ivan Hare and James Weinstein provide three reasons why the suppression of speech may work to our disadvantage. Speaking specifically of racist language, the authors argue that “allowing and then combating hate speech discursively is the only real way to keep alive the understanding of the evil of racial hatred.” The philosopher and political economist John Stuart Mill acknowledged this same phenomenon in his seminal work, On Liberty. He held that if a given truth (such as racial equality) “is not fully, frequently, and fearlessly discussed, it will be held as a dead dogma, not a living truth.” Hare, Weinstein, and Mill each recognize that truths lose their vigor when not confronted with lies: thus, keeping hateful language alive and well on social media provides an opportunity to reinforce the veracity of our counterclaims.

Secondly, Hare and Weinstein contend that when we force bad speech underground, we obscure “the extent and location of the problem to which society must respond.” In other words, suppressing deplorable speech online doesn’t eliminate the problem, it merely moves it elsewhere. While we may revel in our blissful ignorance of bad speech, it remains in the background, ever-present. Thus, the only thing we truly eliminate is our ability to combat it.

Finally, the authors argue that suppressing the speech of hateful people (in this case of racists) merely increases their “sense of oppression and their willingness to express their views violently.” This view is supported by law professor Erica Goldberg, who in an article for the Columbia Law Review postulated that “there is reason to believe that suppressing the types of speech most damaging to women and minorities may not actually benefit members of these groups in the long term.” Given these likely externalities, is whitewashing our newsfeeds really the proper course of action? If our goal is to eventually eliminate oppressive beliefs from our society, then I sincerely believe that we should place our faith in the illuminating power of truth, not the obscuring shadow of censorship.

Most likely responding to pressure from (typically left-wing) politicians and the general populace, Mark Zuckerberg recently indicated a shift away from the “town hall” approach to social media, advocating instead for the “digital equivalent of the living room.” Whereas in a town hall we are exposed to a multiplicity of views both good and bad, in our living room, we are generally not (save for the crazy uncle). In living rooms, we rarely have to confront the racists, the sexists, the antisemites, the homophobes, the xenophobes, etc. But their absence from our living room doesn’t mean they are not there when we step outside the front door. While Mr. Zuckerberg may have the right intentions, a world in which the courts codify Facebook as a town hall may be a world in which we see a decrease, not an increase, in hateful language. If we resist the temptation to silence reprehensible ideas, we may sooner realize our ultimate goals.

Frederick Douglass once wrote that “to suppress free speech is a double wrong. It violates the rights of the hearer as well as those of the speaker.” We are the hearers, and to eliminate bad speech from social media is to deafen ourselves to the speech that festers on the underbelly of society whether we are listening or not. Counterintuitive as it may seem, if a judicial interpretation of Facebook as a public forum were to protect us from our natural tendency to silence those with whom we disagree, our digital world — and the real world that it reflects — would soon be the better for it.