Although Facebook does not block false speech, it does make certain categories of false speech more difficult to find and points users toward other, presumably more accurate, articles about a topic.133 Moreover, sites posting false speech often violate other Facebook rules (e.g., rules against spam, hate speech, or fake accounts) and are suppressed.134 Facebook product manager Tessa Lyons says “we don’t want to be and are not the arbiters of truth.”135 Yet Facebook has delegated the task of determining the factual truth of contested content to a network of third-party fact-checkers.136 While this allows Facebook to avoid the difficult and politically fraught work of distinguishing fact from fiction, Facebook is still held responsible for its selection of fact-checkers and the impact of their decisions. In September 2018, the Weekly Standard, the sole conservative organization admitted to Facebook’s fact-checking program, deemed a ThinkProgress article, or at least its headline, false, thereby limiting its distribution. ThinkProgress took umbrage at the decision and criticized Facebook for granting a conservative publication the power to downrank its content.137

Facebook appears to want to let a thousand flowers bloom on its platform, yet it employs fact-checking gardeners to cut down the false ones. The public values truth, and we may hope that conspiracy theories and obvious falsehoods are bad for business. On the other hand, a tech company deciding between the competing truths offered by blue and red speakers invites political attacks on its platform and, over the long term, sows doubt about the fairness of its content-moderation policies. Tech companies may sanction speech in circumstances where government must remain passive. Yet that power has its own problems, not least of which is deciding between contending armies in an age of culture wars.

Many nations have undertaken regulation of fake news recently.138 That such illiberal countries as Belarus, China, Cameroon, or Russia (among others) would impose government restrictions on posting or spreading misinformation may not surprise anyone.139 But European nations are more open to actively regulating speech than the United States. In November 2018, France gave authorities the power to “remove fake content spread via social media and even block the sites that publish it.”140 The European Commission has issued an initial report on disinformation that will be followed by a process of oversight and evaluation of online speech.141 For now, the commission is supporting principles and policies that would be enacted by stakeholders including the news media and online companies.142 Does such nudging of private actors constitute political pressure to suppress speech? If disinformation and fake news remain a problem, would the commission directly manage online speech or encourage national governments to take stronger measures to suppress such speech?

The United States regulates speech less than Europe does, so perhaps the European examples of regulating disinformation are not relevant to this nation.143 Yet the debate over fake news has lasted only a couple of years. Little has been said during that debate about the limits of government power over online speech; much has been said about the dangers fake news poses to democracy. Should future national elections turn out badly, the United States might be tempted to take a more European attitude and approach to online speech.

We should thus keep in mind that the case for public, as opposed to private, regulation of fake news online is weak. Fake news has no fixed meaning, so regulations of it would likely be unconstitutionally vague. The public values truth, but the search for truth in the United States must abide by the First Amendment, and the courts have held that false speech, of which fake news is a part, also has the protection of the First Amendment. Were this not true, the combination of vagueness and politics in a polarized age would mean virtually anything “the other side” said could be regulated as fake news. But fake news might not be the most likely reason for suppressing online speech.

Hate speech may be defined as “offensive words, about or directed toward historically victimized groups.”144 That definition seems clear enough. But consider the case of The Bell Curve, a 1994 book by Charles Murray and Richard Herrnstein. Among other things, the authors state that the average IQ score of African Americans is one standard deviation below the average score of the population. Many also thought the book argued that nature was far more important than nurture in determining the IQ of individuals and groups, a claim that suggested social reforms would have little effect on individual and group outcomes.145 The Bell Curve offended many people; “historically victimized groups” might well have taken offense. Was The Bell Curve hate speech? If not, where should elected officials draw the line between permitted and prohibited speech?

The Supreme Court has resisted drawing such lines. Even efforts to legislate more common-sense bans on group invective have failed; the court has consistently invalidated laws containing terms such as “contemptuous,” “insulting,” “abusive,” and “outrageous.”146 The U.S. government lacks the power to prohibit “hate speech.”

Yet many nations regulate or prohibit speech offensive to protected groups. They limit freedom of speech to advance other values such as equal dignity. This balancing of values was first developed in Germany and has spread to other jurisdictions in the post–World War II era.147 In Germany, the law punishes “incitement of the people,” which is understood as spurring hatred of protected groups, demanding violent or arbitrary measures against them, or attacking their human dignity. Those convicted of incitement may be jailed for up to five years.148 The United Kingdom also criminalizes the expression of racial hatred.149 In two recent cases, a hate speech conviction led to incarceration.150

The United States has debated regulating hate speech for nearly a century.151 Legal scholar James Weinstein summarizes the outcome of this debate: “The United States is an outlier in the strong protection afforded some of the most noxious forms of extreme speech imaginable.”152 The Supreme Court precludes government from regulating speech because of the message it conveys; that is, it forbids content-based regulation. For the court, the worst form of content-based regulation is “viewpoint discrimination,” meaning restrictions based on “the specific motivating ideology or the opinion or perspective of the speaker.”153 This constraint on political power extends to highly offensive speech, which implies, Weinstein remarks, “a complete suspension of civility norms within the realm of public discourse.”154 Government may regulate some speech that lies outside public discourse, such as speech involving government activities or commercial advertising.155

The Supreme Court has applied this general framework to protect speech hostile to racial minorities. In its decision in R.A.V. v. City of St. Paul, the court dealt with a St. Paul ordinance punishing speech that “one knows or has reasonable grounds to know arouses anger, alarm or resentment in others on the basis of race, color, creed, religion or gender.”156 A lower court recognized that, read literally, the ordinance reached protected as well as unprotected speech and thus was unconstitutionally overbroad; it therefore interpreted the ordinance to apply only to “fighting words,” which have been considered outside the protections of the First Amendment. A majority of the Supreme Court went further, holding that St. Paul had engaged in viewpoint discrimination by punishing some but not all “fighting words,” a distinction based on the ideological content of the speech.

In theory, it is possible for the courts to uphold viewpoint discrimination, but such distinctions must pass the strict scrutiny test discussed earlier. To survive, the St. Paul ordinance would have needed to be narrowly drawn to achieve a compelling government interest. The court recognized the importance of protecting minorities, yet the government had other means to achieve that end, means that were neutral toward the content of the speech.157 Most experts assume R.A.V. v. City of St. Paul precludes government suppression of hate speech. Accordingly, hate speech on social media lies beyond government power.158

In contrast to the government, social media managers may regulate speech by users that is hostile to some groups. Facebook does so extensively. Facebook defines hate speech as “anything that directly attacks people based on what are known as their ‘protected characteristics’—race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.”159 Facebook is opposed “to hate speech in all its forms”; such speech is not allowed on its platform as a matter of policy.160 Hate speech is forbidden on Facebook because it causes harm by creating “an environment of intimidation and exclusion and in some cases may have dangerous offline implications.”161 In June 2017, Richard Allan, vice president for public policy at Facebook, said: “Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week—that’s around 288,000 posts a month globally.”162 However, at that time, Facebook had over two billion active users.163 The number of removed hate speech posts, though large in absolute terms, is trivial relative to the scale of the platform.

Other major platforms have policies that protect people with a similar list of characteristics from hostile speech. Google has a general policy against “incitement to hatred” directed at a list of groups.164 YouTube, which is owned by Google, does not permit hate speech directed at any of seven listed groups.165 This policy led to the removal of videos by Alex Jones.166 Twitter has a similar policy against hate speech.167

In sum, the First Amendment does not permit government to censor speech to prevent harms to the public apart from known exceptions such as direct incitement to violence. The government may not censor fake news or hate speech. Private regulators are doing what government officials may not do: regulating and suppressing speech believed to cause harm to citizens generally and protected groups specifically. Private action thus weakens the case for moving the United States toward a more European approach to fake news and hate speech.

But such private action presents a mixed picture for supporters of robust protections for speech. The platforms offer less protection for speech than the government does. Social media managers discriminate among speakers according to the content of their speech and the viewpoints expressed. Tech companies have in part applied to speech the proportionality test long recognized in Europe and rejected in this country. Private content governance of social media poses a quandary, particularly for libertarians and anyone who recognizes that private property implies a strong right for social media managers to control what happens on their internet platforms without government interference. It seems likely that social media managers choose to limit speech in the short term to fulfill their larger goal of building a business for the long term. They may believe that excluding extreme speech is required to sustain and increase the number of users on their platform.

Moreover, we should ask whether these efforts regarding hate speech (along with private suppression of Russian speech, terrorist incitement, or fake news) are truly private decisions rather than state action. If Facebook or other platforms remove content to avoid government regulation, is such suppression state action, or a hybrid of private choice shaped by public threats and offers?

Conclusion

We began with Cloudflare CEO Matthew Prince’s concern about legitimate governance of speech on the internet. Prince’s desire to bring government into online speech controversies is understandable but misplaced. American history and political culture assign priority to the private sector in governing speech online, particularly on social media. The arguments advanced for a greater scope of government power do not stand up. Granting such power would gravely threaten free speech and the independence of the private sector. We have seen that the tech companies are grappling with many of the problems cited by those calling for public action. The companies are technically sophisticated and thus better equipped than government to deal with these issues. Of course, their efforts may warrant scrutiny and criticism, now and in the future. But at the moment, a reasonable person can see promise in those efforts, particularly in contrast to the likely dangers posed by government regulation.

Government officials may attempt directly or obliquely to compel tech companies to suppress disfavored speech. The victims of such public-private censorship would have little recourse apart from political struggle. The tech companies, which rank among America’s most innovative and valuable firms, would then be drawn into the swamp of a polarized and polarizing politics. To avoid politicizing tech, it is vital that private content moderators be able to ignore explicit or implicit threats to their independence from government officials.

It is Facebook, Medium, and Pinterest—not Congress or President Trump—that have a presumption of legitimacy to remove the speech of StormFront and similar websites. These firms need to nurture their legitimacy to moderate content. The companies may have to fend off government officials eager to suppress speech in the name of the “public good.” The leaders of these businesses may regret being called to meet this challenge with all its political and social dangers and complexities. But this task cannot be avoided. No one else can or should do the job.

Notes

1. Remarks at presentation at the Cato Institute, November 28, 2017. See also Matthew Prince, “Why We Terminated Daily Stormer,” Cloudflare (blog), August 16, 2017. “Law enforcement, legislators, and courts have the political legitimacy and predictability to make decisions on what content should be restricted. Companies should not.” Thanks to Alissa Starzak for the reference.

2. Social media firms are often obligated to follow laws in nations where they operate. In the future, such laws may create enforceable transnational obligations. For now, however, the debate concerns national audiences and policies. See David R. Johnson and David G. Post, “Law and Borders: The Rise of Law in Cyberspace,” Stanford Law Review 48, no. 5 (1996): 1367.

3. Tom Standage, Writing on the Wall: Social Media—The First 2,000 Years (New York: Bloomsbury, 2013), p. 8. See also Sheryl Sandberg’s definition in testimony before the Senate Intelligence Committee: “Social media enables you to share what you want to share, when you want to share it, without asking permission from anyone, and that’s how we meet our mission, which is giving people a voice.” Sheryl Sandberg, Facebook chief operating officer, and Jack Dorsey, Twitter chief executive officer, Foreign Influence Operations’ Use of Social Media Platforms, Testimony before the Senate Committee on Intelligence, 115th Cong., 2nd sess., September 5, 2018.

4. Standage, p. 13.

5. J. A. Obar and S. Wildman, “Social Media Definition and the Governance Challenge: An Introduction to the Special Issue,” Telecommunications Policy 39, no. 9 (2015): 746.

6. “The backbone of the social media service is the user profile. … The type of identifying information requested, as well as the options for identifying oneself vary considerably from service to service, but often include the option of creating a username, providing contact information and uploading a picture. The reason the profile serves this backbone function is to enable social network connections between user accounts. Without identifying information, finding and connecting to others would be a challenge.” Obar and Wildman, 747.

7. Commercial speech plays a small part in these policy matters. Advertising, as will be shown, does matter. However, the speech carried by ads is political and not commercial. See the discussion of Russian “meddling” in the 2016 U.S. election.

8. Red Lion Broadcasting Co. v. Federal Communications Commission, 395 U.S. 367 (1969).

9. See the website for the Heritage Guide to the Constitution for a concise discussion. Eugene Volokh, “Freedom of Speech and of the Press,” Heritage Guide to the Constitution (website).

10. Sorrell v. IMS Health Inc., 564 U.S. 552 (2011); Martin H. Redish, “Commercial Speech and the Values of Free Expression,” Cato Institute Policy Analysis no. 813, June 19, 2017.

11. United States v. Carolene Products Co., 304 U.S. 144 (1938), 152: “Regulatory legislation affecting ordinary commercial transactions is not to be pronounced unconstitutional unless, in the light of the facts made known or generally assumed, it is of such a character as to preclude the assumption that it rests upon some rational basis within the knowledge and experience of the legislators.”

12. Other reasons might counsel not regulating speech on social media. The costs of such regulation might outweigh the benefits to society. Here I examine only rights that might preclude or weigh heavily against regulation.

13. See the extended discussion in Daphne Keller, “Who Do You Sue? State and Platform Hybrid Power over Online Speech,” Aegis Series Paper no. 1902, Hoover Institution, Stanford, 2019, pp. 17–21.

14. Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241 (1974), 258.

15. Monika Bickert, head of global policy management at Facebook, writes: “First, [social media] generally do not create or choose the content shared on their platform; instead, they provide the virtual space for others to speak.” Monika Bickert, “Defining the Boundaries of Free Speech on Social Media,” in The Free Speech Century, ed. Lee Bollinger and Geoffrey Stone (New York: Oxford University Press, 2018), p. 254. The important word here is “generally.” Relatively speaking, very little content is removed.

16. Eugene Volokh and Donald M. Falk, “Google: First Amendment Protection for Search Engine Search Results,” Journal of Law, Economics, and Policy 8, no. 4 (2012): 886–88.

17. See, for example, their CEO’s discussion of content moderation, which takes a stand “against polarization and extremism” while affirming other commitments. Mark Zuckerberg, “A Blueprint for Content Governance and Enforcement,” Facebook, November 15, 2018.

18. Protection for Private Blocking and Screening of Offensive Material, 47 U.S.C. § 230.

19. Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003), 8443; and Electronic Frontier Foundation (website), “CDA 230: Legislative History.”

20. Batzel v. Smith, 8445.

21. Batzel v. Smith, 8445.

22. Bickert, “Defining,” 254–55.

23. Perry Education Association v. Perry Local Educators’ Association, 460 U.S. 37 (1983).

24. The courts are first among equals in the United States on these matters. “The First Amendment, as interpreted by the courts, provides an anchor for freedom of the press and thus accentuates the difference between publishing and the electronic domain. Because of the unique power of the American courts, the issue in the United States unfolds largely in judicial decisions.” Ithiel de Sola Pool, Technologies of Freedom (Cambridge: Harvard University Press, 1983), p. 8.

25. Marsh v. Alabama, 326 U.S. 501 (1946), 506, 509.

26. Food Employees v. Logan Valley Plaza Inc., 391 U.S. 308 (1968), 324.

27. Lloyd Corp. v. Tanner, 407 U.S. 551 (1972).

28. Hudgens v. NLRB, 424 U.S. 507 (1976), 513.

29. Pruneyard Shopping Ctr. v. Robins, 447 U.S. 74 (1980).

30. Pruneyard Shopping Ctr. v. Robins, 81–82.

31. Dahlia Lithwick, “Why Can Shopping Malls Limit Free Speech?,” Slate, March 10, 2003.

32. Communications Act of 1934, 47 U.S.C. § 151, Pub. L. No. 73–416.

33. Thomas Winslow Hazlett, The Political Spectrum: The Tumultuous Liberation of Wireless Technology, from Herbert Hoover to the Smartphone (New Haven: Yale University Press, 2017), p. 146.

34. The monopoly argument is not limited to one part of the political spectrum. The head of CNN called for investigation of “these monopolies that are Google and Facebook.” See Stewart Clarke, “CNN Boss Jeff Zucker Calls on Regulators to Probe Google, Facebook,” Variety, February 26, 2018; Tim Wu, The Curse of Bigness: Antitrust in the New Gilded Age (New York: Columbia Global Reports, 2018).

35. Trump’s campaign manager Brad Parscale has written, “Google claims to value free expression and a free and open internet, but there is overwhelming evidence that the Big Tech giant wants the internet to be free and open only to political and social ideas of which it approves.” Brad Parscale, “Trump Is Right: More than Facebook and Twitter, Google Threatens Democracy, Online Freedom,” USA Today, September 10, 2018. Parscale’s article offers several examples of putative bias against conservatives. During the Republican primaries in 2016, Facebook employees admitted they “routinely suppressed news stories of interest to conservative readers from the social network’s influential ‘trending’ news section.” Facebook denied the charge. See Michael Nunez, “Former Facebook Workers: We Routinely Suppressed Conservative News,” Gizmodo, May 9, 2016; see also Peter van Buren, “Extend the First Amendment to Twitter and Google Now,” The American Conservative, November 7, 2017. This view appears to be spreading on the right; see Michael M. Grynbaum and John Herrman, “New Foils for the Right: Google and Facebook,” New York Times, March 6, 2018.

36. Laura Stevens, Tripp Mickle, and Jack Nicas, “Tech Giants Power to New Heights,” Wall Street Journal, February 2, 2018.

37. David S. Evans and Richard Schmalensee, “Debunking the ‘Network Effects’ Bogeyman: Policymakers Need to March to the Evidence, Not to Slogans,” Regulation 40 (Winter 2017–18): 36.

38. Evans and Schmalensee, “Debunking,” 39.

39. “There are some important industries where ‘winner takes most’ may apply. But even there, victory is likely to be more transient than economists and pundits once thought. In social networking, Friendster lost to MySpace, which lost to Facebook, and, while Facebook seems entrenched, there are many other social networks nipping at its heels.” David S. Evans, Matchmakers: The New Economics of Multisided Platforms (Cambridge: Harvard Business Review Press, 2016), Kindle edition.

40. Facebook “was built on the power of network effects: You joined because everyone else was joining. But network effects can be just as powerful in driving people off a platform. Zuckerberg understands this viscerally.” Nicholas Thompson and Fred Vogelstein, “Inside the Two Years That Shook Facebook—and the World,” Wired, February 12, 2018.

41. Amy Mitchell, Elisa Shearer, Jeffrey Gottfried, and Michael Barthel, “How Americans Get Their News,” Pew Research Center, July 7, 2016.

42. Daniel Trotta, “Shunned by Corporations, U.S. Gun Entrepreneurs Launch Start-Ups,” Reuters, May 6, 2018.

43. Hazlett, The Political Spectrum, pp. 91–92.

44. Bruce M. Owen, Jack H. Beebe, and Willard G. Manning, Television Economics (Lexington, MA: Lexington, 1974), p. 12, quoted in Hazlett, The Political Spectrum, p. 92.

45. Hazlett, The Political Spectrum, pp. 20–21.

46. Hazlett, The Political Spectrum, pp. 14–16.

47. Hazlett, The Political Spectrum, pp. 143–45.

48. For the public failure see John Samples, “Broadcast Localism and the Lessons of the Fairness Doctrine,” Cato Institute Policy Analysis no. 639, May 27, 2009, pp. 7–8; for Hazlett, see The Political Spectrum, pp. 149–52.

49. “A surprising number of people it seems dislike being exposed to the processes endemic to democratic government. People profess a devotion to democracy in the abstract but have little or no appreciation for what a practicing democracy invariably brings with it.… People do not wish to see uncertainty, conflicting options, long debate, competing interests, confusion, bargaining, and compromised, imperfect solutions.” John R. Hibbing and Elizabeth Theiss-Morse, Congress as Public Enemy: Public Attitudes Toward American Political Institutions (Cambridge: Cambridge University Press, 1995), p. 147.

50. Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (Princeton: Princeton University Press, 2018), p. 157.

51. One way to deal with this conflict between “our” aspirations and the revealed preferences of individuals has been to insist that revealed preferences actually contravene the true interests of individuals, whereas our aspirations represent a truth to be honored by government action. Equating revealed preferences with false consciousness is one version of this argument. As we shall see, Sunstein does not go far down the path toward imposing our aspirations.

52. Sunstein, #Republic, p. 260.

53. Sunstein, #Republic, p. 43. He appeals to the spirit if not the letter of constitutional doctrine: “On the speakers’ side, the public forum doctrine thus creates a right of general access to heterogeneous citizens. On the listeners’ side, the public forum creates not exactly a right but rather an opportunity, if perhaps an unwelcome one: shared exposure to diverse speakers with diverse views and complaints.” Sunstein, #Republic, p. 38.

54. Sunstein, #Republic, p. 49.

55. Sunstein, #Republic, pp. 71, 259. Former president Barack Obama has said that “essentially we now have entirely different realities that are being created with not just different opinions, but now different facts. And this isn’t just by the way Russian inspired bots and fake news. This is Fox News vs. The New York Times editorial page. If you look at these different sources of information, they do not describe the same thing. In some cases, they don’t even talk about the same thing. And so it is very difficult to figure out how democracy works over the long term in those circumstances.” He added that government should put “basic rules of the road in place that create level playing fields.” Robby Soave, “5 Things Barack Obama Said in His Weirdly Off-the-Record MIT Speech,” Hit and Run (blog), Reason, February 26, 2018.

56. Sunstein, #Republic, p. 255.

57. Sunstein, #Republic, p. 67.

58. Cristian Vaccari, “How Prevalent Are Filter Bubbles and Echo Chambers on Social Media? Not as Much as Conventional Wisdom Has It,” Cristian Vaccari (blog), February 13, 2018.

59. See the studies cited in Michael A. Beam, Myiah J. Hutchens, and Jay D. Hmielowski, “Facebook News and (de)Polarization: Reinforcing Spirals in the 2016 US Election,” Information, Communication and Society 21, no. 7 (July 3, 2018): 4.

60. Solomon Messing and Sean J. Westwood, “Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online,” Communication Research 41, no. 8 (December 2014): 1042–63.

61. Pablo Barberá, “How Social Media Reduces Mass Political Polarization. Evidence from Germany, Spain, and the U.S.,” working paper, New York University, 2014. Paper prepared for the 2015 APSA Conference.

62. Frederik J. Zuiderveen Borgesius, Damian Trilling, Judith Möller, Balázs Bodó, Claes H. de Vreese, and Natali Helberger, “Should We Worry about Filter Bubbles?,” Internet Policy Review 5, no. 1 (2016): 10.

63. Matthew Barnidge, “Exposure to Political Disagreement in Social Media Versus Face-to-Face and Anonymous Online Settings,” Political Communication 34, no. 2 (2016): 302–21.

64. Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro, “The Internet, Political Polarization, and the 2016 Election,” Cato Institute Research Brief in Economic Policy no. 88, November 1, 2017.

65. Beam et al., “Facebook News and (de)Polarization,” 1.

66. Vaccari, “How Prevalent Are Filter Bubbles and Echo Chambers on Social Media?”

67. Elizabeth Dubois and Grant Blank, “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media,” Information, Communication and Society 21, no. 5 (2018): 729–45.

68. Richard Fletcher and Rasmus Kleis Nielsen, “Are People Incidentally Exposed to News on Social Media? A Comparative Analysis,” New Media and Society 20, no. 7 (July 2018): 2450–68.

69. Sunstein, #Republic, p. 262.

70. Sunstein, #Republic, p. 260; see also p. 158.

71. Owen Fiss, “Free Speech and Social Structure,” Iowa Law Review 71 (1986): 1405–25.

72. Owen Fiss, The Irony of Free Speech (Cambridge, MA: Harvard University Press, 1996).

73. Sunstein, #Republic, p. 260.

74. Sunstein has proposed “nudging” people to make better decisions by altering their choice architecture; see Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven: Yale University Press, 2008).

75. Sunstein, #Republic, p. 215.

76. Sunstein, #Republic, p. 226. “I certainly do not suggest or believe that government should require anything of this kind (i.e., mandatory linking to opposing views). Some constitutional questions are hard, but this one is easy: any such requirements would violate the First Amendment.” Sunstein, p. 231.

77. Sunstein, #Republic, pp. 84–85, 221.

78. Amos A. Jordan et al., American National Security (Baltimore: Johns Hopkins University Press, 2011), pp. 3–4.

79. J. M. Berger, “The Difference between a Killer and a Terrorist,” The Atlantic, April 26, 2018.

80. Kathleen Ann Ruane, “The Advocacy of Terrorism on the Internet: Freedom of Speech Issues and the Material Support Statutes,” Congressional Research Service, September 8, 2016, p. 1.

81. Brandenburg v. Ohio, summarized in Ruane, “The Advocacy of Terrorism on the Internet,” p. 5.

82. Brandenburg v. Ohio, 395 U.S. 444 (1969), 448; and Ruane, “The Advocacy of Terrorism on the Internet,” p. 4.

83. Eric Posner, “ISIS Gives Us No Choice but to Consider Limits on Speech,” Slate, December 15, 2015.

84. David G. Post, “Protecting the First Amendment in the Internet Age,” Washington Post, December 21, 2015.

85. See Pennie v. Twitter Inc., 2017 WL 5992143 (N.D. Cal. Dec. 4, 2017); Force v. Facebook Inc., 2018 WL 472807 (E.D.N.Y. Jan. 18, 2018); Crosby v. Twitter Inc., 303 F. Supp. 3d 564 (E.D. Mich. April 2, 2018); Gonzalez v. Google Inc., 2018 WL 3872781 (N.D. Cal. Aug. 15, 2018); Cain v. Twitter Inc., 2018 WL 4657275 (N.D. Cal. Sept. 24, 2018); Cohen v. Facebook Inc., 2017 WL 2192621 (E.D.N.Y. May 18, 2017); and Fields v. Twitter Inc., 2018 WL 626800 (9th Cir. Jan. 31, 2018).

86. Fields v. Twitter Inc., 2018 WL 626800 (9th Cir. Jan. 31, 2018).

87. Zann Isacson, “Combating Terrorism Online: Possible Actors and Their Roles,” Lawfare, September 2, 2018; Matt Egan, “Does Twitter Have a Terrorism Problem?,” Fox Business (website), October 9, 2013.

88. “Violent or Graphic Content Policies,” YouTube Help.

89. “Dangerous Individuals and Organizations,” Facebook Community Standards.

90. “The Twitter Rules,” Twitter Help Center.

91. Mark Zuckerberg, “Preparing for Elections,” Facebook, September 13, 2018.

92. Ithiel de Sola Pool believed national security issues would become more important in the electronics age: “Censorship is often imposed for reasons of national security, cultural protection, and trade advantage. These issues, which have not been central in past First Amendment controversies, are likely to be of growing importance in the electronic era.” Ithiel de Sola Pool, Technologies of Freedom (Cambridge, MA: Harvard University Press, 1983), p. 9.

93. Lamont v. Postmaster General, 381 U.S. 301 (1965).

94. Lamont v. Postmaster General, 307.

95. 11 CFR 110.20(f).

96. FEC.gov, FEC Record: Outreach, “Foreign Nationals.”

97. FEC.gov, “Foreign Nationals,” citing Bluman v. FEC, 800 F. Supp. 2d 281, 290 (D.D.C. 2011), affirmed, 132 S. Ct. 1087 (2012).

98. “Foreign Nationals Brochure,” Federal Election Commission, July 2003.

99. 22 U.S.C. § 611. The Mueller indictment notes that this disclosure informs “the people of the United States … of the source of information and the identity of persons attempting to influence U.S. public opinion, policy, and law.” This information in turn allows Americans to “evaluate the statements and activities of such persons in light of their function as foreign agents.” Indictment at 11, United States v. Internet Research Agency LLC et al., Case 1:18-cr-00032-DLF (D.D.C. filed Feb. 16, 2018).

100. “General FARA Frequently Asked Questions,” Department of Justice, August 21, 2017.

101. Lobbying on behalf of commercial interests makes up a significant part of foreign lobbying of the U.S. government; see Holly Brasher, Vital Statistics on Interest Groups and Lobbying (Thousand Oaks, CA: SAGE Publications, 2014), pp. 136–44.

102. Jack Stubbs and Ginger Gibson, “Russia’s RT America Registers as ‘Foreign Agent’ in U.S.,” Reuters, November 13, 2017; James Kirchik, “Why Russia’s RT Should Register as an Agent of a Foreign Government,” Brookings (blog), September 22, 2017.

103. RT claims to have eight million weekly U.S. viewers, though the real numbers are likely far smaller. See Amanda Erickson, “If Russia Today Is Moscow’s Propaganda Arm, It’s Not Very Good at Its Job,” Washington Post, January 12, 2017.

104. The Russian ads would have still been illegal even if the funder had been disclosed.

105. Scott Shane, “How Unwitting Americans Encountered Russian Operatives Online,” New York Times, February 19, 2018.

106. Brendan Nyhan, “Fake News and Bots May Be Worrisome, but Their Political Power Is Overblown,” New York Times, February 19, 2018.

107. Byron York, “A Non-alarmist Reading of the Mueller Russia Indictment,” Washington Examiner, February 18, 2018.

108. Ross Douthat, “The Trolling of the American Mind,” New York Times, February 21, 2018.

109. Mark Zuckerberg, “Preparing for Elections,” Facebook, September 13, 2018. Subsequent quotations from Zuckerberg will refer to this source.

110. “What Super PACs, Non-profits, and Other Groups Spending Outside Money Must Disclose about the Source and Use of Their Funds,” OpenSecrets.

111. Speaking during a 2017 congressional investigation of Russian efforts during the 2016 election, Sen. Dianne Feinstein (D-CA) said to tech leaders: “You’ve created these platforms, and now they are being misused, and you have to be the ones to do something about it. Or we will.” Byron Tau, Georgia Wells, and Deepa Seetharaman, “Lawmakers Warn Tech Executives More Regulation May Be Coming for Social Media,” Wall Street Journal, November 1, 2017.

112. Zuckerberg, “Preparing for Elections.”

113. Stephen A. Siegel, “The Origin of the Compelling State Interest Test and Strict Scrutiny,” American Journal of Legal History 48, no. 4 (2006): 355–407.

114. We treat “misinformation” and “disinformation” here as subsets of fake news and, more generally, as kinds of false speech. Misinformation may be speech that is intentionally false. For First Amendment purposes, it would be difficult to distinguish unintentionally false speech from intentionally false speech. If that distinction cannot be made, the analysis that applies to false speech also covers misinformation, keeping in mind that the focus here is on public values.

115. Nadine Strossen, HATE: Why We Should Resist It with Free Speech, Not Censorship (New York: Oxford University Press, 2018), pp. 69–70.

116. Brooke Borel, “Fact-Checking Won’t Save Us from Fake News,” FiveThirtyEight, January 4, 2017.

117. Bertin Martens, Luis Aguiar, Estrella Gomez-Herrera, and Frank Mueller-Langer, “The Digital Transformation of News Media and the Rise of Disinformation and Fake News—An Economic Perspective,” Digital Economy Working Paper 2018-02, JRC Technical Reports, pp. 8–11.

118. See Alvarez, p. 4, quoting Ashcroft, “The First Amendment means that government has no power to restrict expression because of its message, its ideas, its subject matter, or its content.” United States v. Alvarez, 567 U.S. 709 (2012); and Ashcroft v. American Civil Liberties Union, 535 U.S. 564, 573 (2002).

119. Alvarez, p. 4.

120. Alvarez, p. 4.

121. Alvarez, p. 7.

122. Alvarez, p. 11.

123. See New York Times Co. v. Sullivan, 376 U.S. 254 (1964), and Alvarez.

124. United States v. Alvarez, 567 U.S. 709 (2012).

125. New York Times Co. v. Sullivan, 376 U.S. 254 (1964).

126. Protection for Private Blocking and Screening of Offensive Material, 47 U.S. Code § 230.

127. David French, “A Better Way to Ban Alex Jones,” New York Times, August 7, 2018.

128. Michael Barthel, Amy Mitchell, and Jesse Holcomb, “Many Americans Believe Fake News Is Sowing Confusion,” Pew Research Center, December 15, 2016.

129. Bertin Martens et al., “The Digital Transformation of News Media and the Rise of Disinformation and Fake News—An Economic Perspective,” Digital Economy Working Paper 2018-02, JRC Technical Reports, pp. 40–47.

130. George A. Akerlof, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,” Quarterly Journal of Economics 84, no. 3 (1970): 488–500.

131. Martens et al., “Digital Transformation,” pp. 51–52.

132. Richard Allan, “Hard Questions: Where Do We Draw the Line on Free Expression?,” Facebook Newsroom, August 9, 2018.

133. Allan, “Where Do We Draw,” which states, “And rather than blocking content for being untrue, we demote posts in the News Feed when rated false by fact-checkers and also point people to accurate articles on the same subject.”

134. Tessa Lyons, Facebook product manager, “Hard Questions: What’s Facebook’s Strategy for Stopping False News?,” Facebook Newsroom, May 23, 2018.

135. Laura Hazard Owen, “Facebook Is Paying Its Fact-Checking Partners Now (and Giving Them a Lot More Work to Do),” Nieman Lab.

136. “How Is Facebook Addressing False News through Third-Party Fact-Checkers?,” Facebook Help Center, Facebook.

137. Casey Newton, “A Partisan War over Fact-Checking Is Putting Pressure on Facebook,” The Verge (website), September 12, 2018.

138. See the Poynter Institute’s regularly updated database of fake news regulation around the world: Daniel Funke, “A Guide to Anti-misinformation Actions around the World,” Poynter Institute, Poynter.org.

139. The reputations of China, Russia, and Belarus are well known in this regard. Cameroon is less infamous, but its problems are summarized by a recent headline in The Guardian, “Cameroon Arrests Opposition Leader Who Claims He Won 2018 Election.”

140. Funke, “Guide,” Poynter.org. See the entry for France.

141. European Commission, A Multidimensional Approach to Disinformation: Report of the Independent High Level Group on Fake News and Online Disinformation (Publications Office of the EU, March 30, 2018). For the process going forward, see p. 33.

142. European Commission, Multidimensional, pp. 35–38.

143. According to the Poynter Institute, neither Congress nor the states have tried to suppress fake news. The California legislature did pass a bill setting up an advisory commission “to monitor information posted and spread on social media,” but the governor vetoed it. See “Governor Brown Vetoes Fake News Advisory Group Bill, Calls It ‘Not Necessary’,” CBS Sacramento (website), September 27, 2018.

144. Samuel Walker, Hate Speech: The History of an American Controversy (Lincoln: University of Nebraska Press), p. 1.

145. The authors appear somewhat skeptical about the effects of nature vs. nurture on IQ. See Richard J. Herrnstein and Charles Murray, The Bell Curve: Intelligence and Class Structure in American Life (New York: Free Press, 1996), p. 131. For the average IQ claim, see pp. 276–77.

146. Strossen, HATE, p. 71.

147. Dieter Grimm, “Freedom of Speech in a Globalized World,” in Extreme Speech and Democracy, ed. Ivan Hare and James Weinstein (New York: Oxford University Press, 2009), p. 13.

148. See Strafgesetzbuch (StGB), § 130 Volksverhetzung, 1–2.

149. Public Order Act, 1986, Part III.

150. Britain First is a “nationalistic, authoritarian, … nativist, ethnocentric and xenophobic” group hostile to Muslim immigrants in the United Kingdom. They are active online with significant consequences for their leaders if not for British elections. The leading and perhaps only scholarly study of the group is Chris Allen, “Britain First: The ‘Frontline Resistance’ to the Islamification of Britain,” Political Quarterly 85, no. 3 (July–September 2014): 354–61. See also the report by the organization Hope not Hate, “Britain First: Army of the Right,” November 2017. The leaders of Britain First, Paul Golding and Jayda Fransen, were incarcerated for distributing leaflets and posting online videos that reflected their extreme antipathy to Muslims. Fransen received a 36-week sentence and Golding an 18-week sentence. Kevin Rawlinson, “Britain First Leaders Jailed over Anti-Muslim Hate Crimes,” The Guardian, March 7, 2018.

151. For the origins of the debate, see Walker, Hate Speech, pp. 17–37.

152. James Weinstein, “An Overview of American Free Speech Doctrine and Its Application to Extreme Speech,” in Extreme Speech and Democracy, eds. Ivan Hare and James Weinstein (New York: Oxford University Press, 2009), p. 81.

153. Weinstein, “Overview,” pp. 81–82, quoting Rosenberger v. Rector and Visitors of University of Virginia, 515 U.S. 819, 829 (1995).

154. Weinstein, “Overview,” p. 82.

155. Weinstein gives examples of such settings: government workplace, state school classroom, and the courtroom. See “Overview,” p. 83.

156. Weinstein, p. 85, n. 34.

157. R.A.V. v. City of St. Paul, 505 U.S. 377 (1992). In the past, the Supreme Court upheld a group libel law, Beauharnais v. Illinois, 343 U.S. 250 (1952). It is generally assumed that while the court has not formally overruled the precedent, it would not validate a group libel law today. See Weinstein, “Overview,” p. 88, and n. 52.

158. This assumes hate speech does not fall into a category of speech recognized as unprotected by the First Amendment (e.g., a “true threat”).

159. Richard Allan, “Hard Questions: Who Should Decide What Is Hate Speech in an Online Global Community?,” Facebook Newsroom, June 27, 2017. Allan is currently Facebook’s vice president for policy.

160. Allan, “Who Should Decide?”

161. Richard Allan, “Hard Questions: Where Do We Draw the Line on Free Expression?,” Facebook Newsroom, August 9, 2018.

162. Allan, “Who Should Decide?”

163. Maddy Osman, “28 Powerful Facebook Stats Your Brand Can’t Ignore in 2018,” Sprout Social (website).

164. “Prohibited Content,” AdSense Help, Google Support; “Hate Speech | Inappropriate Content | Restricted Content,” Developer Policy Center, Google Play.

165. “Violent or Graphic Content Policies,” YouTube Help, Google Support.

166. Catherine Shu, “YouTube Punishes Alex Jones’ Channel for Breaking Policies against Hate Speech and Child Endangerment,” TechCrunch, July 2018.

167. “Hateful Conduct Policy,” Twitter.