One jaw-dropping moment during the Senate’s hearing on Tuesday came when Sen. Ted Cruz asked Facebook CEO Mark Zuckerberg, “Does Facebook consider itself a neutral public forum?” Unsatisfied by Zuckerberg’s response that Facebook is a “platform for all ideas,” Sen. Cruz continued, “Are you a First Amendment speaker expressing your views, or are you a neutral public forum allowing everyone to speak?”

After more back-and-forth, Sen. Cruz said, “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?” It was a baffling question. Sen. Cruz seemed to be suggesting, incorrectly, that Facebook had to make a choice between enjoying protections for free speech under the First Amendment and enjoying the additional protections that Section 230 offers online platforms.

Online platforms are within their First Amendment rights to moderate the content on their services however they like, and Section 230 additionally shields them from many types of liability for their users' speech. It's not one or the other. It's both.

Indeed, one of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users' posts. In two important early cases over Internet speech, courts allowed civil defamation claims against Prodigy but not against CompuServe; since Prodigy deleted some messages for "offensiveness" and "bad taste," a court reasoned, it could be treated as a publisher and held liable for its users' posts. Former Rep. Chris Cox recalls reading about the Prodigy opinion on an airplane and thinking that it was "surpassingly stupid." That reaction led Cox and then-Rep. Ron Wyden to introduce the Internet Freedom and Family Empowerment Act, which would later become Section 230.

The misconception that platforms can somehow lose Section 230 protections for moderating users’ posts has gotten a lot of airtime lately—even serving as the flawed premise of a recent Wired cover story. It’s puzzling that Sen. Cruz would misrepresent one of the most important laws protecting online speech—particularly just a few days after he and his Senate colleagues voted nearly unanimously to undermine that law. (For the record, it’s also puzzling that Zuckerberg claimed not to be familiar with Section 230 when Facebook was one of the largest Internet companies lobbying to undermine it.)

The context of Sen. Cruz’s line of questioning offers some insight into why he misrepresented Section 230: like several Republican members of Congress in both hearings, Sen. Cruz was raising concerns about Facebook allegedly removing posts that represented conservative points of view more often than liberal ones.

There are many good reasons to be concerned about politically motivated takedowns of legitimate online speech. Around the world, the groups silenced on Facebook and other platforms are often those that are marginalized in other areas of public life too.

It’s foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies with an imaginary standard of political neutrality. Trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would be unconstitutional under the First Amendment. In practice, creating additional hoops for platforms to jump through in order to maintain their Section 230 protections would almost certainly result in fewer opportunities to share controversial opinions online, not more: it is precisely because of Section 230 that platforms devoted to niche interests and minority views can thrive.

What’s needed to ensure that a variety of views have a place on social media isn’t creating more legal exceptions to Section 230. Rather, companies should institute reasonable, transparent moderation policies. Platforms shouldn’t over-rely on automated filtering and unintentionally silence legitimate speech and communities in the process. And platforms should add features to give users themselves—not platform owners or third parties—more control over what types of posts they see.

When Congress passed SESTA/FOSTA, members made the mistake of thinking that they could tackle a real-world problem by shifting more civil and criminal liability to online platforms. When members of Congress recite myths about how Section 230 works, it demonstrates a frightening lack of seriousness about protecting our right to speak and gather online.