When Chairman Ajit Pai had the Federal Communications Commission reconsider network neutrality in 2017, he made his agenda clear: He wanted to take a “weed whacker” to the Obama-era open-internet rules.

The rules he wanted to whack, however, were adopted in 2015 after a process that elicited more than 4 million public comments, more than any other regulatory inquiry in U.S. history to that point. The overwhelming majority of them supported net neutrality rules to prevent internet providers from blocking, slowing down, or speeding up access to websites, or from charging sites for faster delivery to users. But despite public support for the relatively new rules, Pai’s 2017 bid to undo net neutrality was ultimately successful. Unsurprisingly, it broke the record for public participation in a regulatory rule-making once again, but this time the process appeared to be clouded by impropriety. A new BuzzFeed report makes it look even sketchier. It underscores just how vulnerable the federal government’s commenting process is, and what’s at risk if it doesn’t get fixed.

When a federal regulatory agency wants to change its rules or craft new policy, it typically has to go through a “notice and comment” process in which the public is invited to weigh in on the impact of the rule change. Thousands of rules are promulgated this way each year, generally drawing anywhere from a few dozen to a few thousand comments apiece. It’s very, very rare for a notice-and-comment proceeding to attract millions of responses, much less the 22 million comments that the 2017 effort to undo the net neutrality rules did.

As the comments came pouring in throughout the second half of that year, it quickly became clear that something was amiss. A little over a week after the comment period opened, John Oliver dedicated a 20-minute segment of his HBO show to the issue, imploring viewers to make their voices heard to prevent, as he put it, “cable company fuckery.” Comments flooded the FCC in such volume that the agency’s electronic filing system shut down, as an investigation by the FCC’s inspector general later determined. When the system initially went down, however, Pai incorrectly told Congress it was because of a mysterious cyberattack. By the end of May, Vice found that comments in favor of repeal were being posted under the names of dead people. Further investigations found that comments in favor of repealing net neutrality were also coming from stolen identities, including those of lawmakers like Oregon Sen. Jeff Merkley and Arizona Rep. Ruben Gallego, who had fake comments opposing net neutrality posted in their names. Bots were posting comments. Hundreds of thousands of comments were coming in from Russian email addresses. Still, despite these improprieties, more than 99 percent of the organic comments (those the evidence suggests came from actual people rather than from prewritten form letters) favored preserving net neutrality.

Now, according to the new investigation from BuzzFeed, it appears that more than a million of the suspicious comments filed with the FCC were the product of a shady outside firm hired by political campaigns, which used people’s personal information obtained in a data breach.

With this many snafus, it’s clear that the online comment system at the FCC, and very likely at other public agencies, is easily exploitable and likely broken to the point that it’s causing more harm than good. Though it may seem like an arcane issue, it’s a big problem. When it comes to crafting new federal policies, the notice and comment process might be the only direct way a member of the public can have a voice in federal decision-making. Regulators are legally required to consider opinions shared by Americans. Though policymakers can’t read every comment if millions are posted, comments can be tallied to help reshape policy proposals. Take what happened in 2014, when the FCC first proposed new net neutrality rules. Back then, under the Obama-era FCC, the original proposal would have allowed internet providers to charge websites for faster delivery to users while barring any outright blocking of websites. This would have created a two-tiered internet. But the public spoke out in the comment process. After immense pressure, the FCC rewrote the rules to bar any kind of paid prioritization, and that version of the rules finally passed at the end of the year. In 2004, a nonprofit where I used to work, Prometheus Radio Project, even sued the FCC after it failed to consider demonstrated public opinion from its comment process when crafting new media ownership rules, and won. The agency was eventually tasked with going back and holding six public hearings across the country to better understand the impact of its rules on diverse communities.

It’s not surprising that the FCC’s comment process has become a mess. There’s currently no CAPTCHA asking you to prove you’re a person before posting a comment, and it’s trivially easy to write a web application that files comments automatically. Pai has even refused to delete fraudulent comments from the net neutrality docket when victims of identity theft asked him to do so. Despite reports from more than a year ago that the agency planned to overhaul its comment system, it’s not clear that anything has actually been done. On Thursday, FCC Commissioner Jessica Rosenworcel called the FCC’s “continued silence” on its broken comment system “shameful.”

This isn’t only a problem at the FCC. The Department of Labor has had fraudulent comments filed, as have the Consumer Financial Protection Bureau, Federal Energy Regulatory Commission, and others. A Wall Street Journal investigation found thousands of fraudulent comments on agency websites. This problem is endemic, and it’s not being addressed.

The answer to this mess isn’t ending the comment process. We need ways to weigh in on policies that affect our lives beyond Election Day, especially when it comes to decisions made by unelected officials at regulatory agencies. The answer is to fix the broken system, and fast. That requires understanding how fake comments are filed and working with technologists, consumer advocates, and other stakeholders to ferret out ways the system can be abused and build a better one. Maybe a new system could require posters to use two-factor authentication. Or perhaps the agencies should build a detection system to weed out duplicates. When the public is asked to participate online, there will always be actors who try to muck it up. But democracy is messy, and protecting it requires constant work.
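To make the duplicate-screening idea concrete: at its simplest, such a system could normalize each comment’s text and count exact matches, flagging any message submitted verbatim (or near-verbatim) an implausible number of times for human review. The sketch below is purely illustrative; the function names and threshold are assumptions for this example, not any agency’s actual system, and a real deployment would need fuzzier matching to catch lightly reworded form letters.

```python
import hashlib
import re
from collections import Counter

def normalize(comment: str) -> str:
    """Lowercase and strip punctuation/extra whitespace so trivially
    altered copies of a form letter map to the same string."""
    text = comment.lower()
    text = re.sub(r"[^a-z0-9 ]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def flag_duplicates(comments, threshold=3):
    """Return the normalized texts that were submitted at least
    `threshold` times, as candidates for manual review."""
    digests = [hashlib.sha256(normalize(c).encode()).hexdigest()
               for c in comments]
    counts = Counter(digests)
    # Map each digest back to one representative normalized comment.
    reps = {}
    for comment, digest in zip(comments, digests):
        reps.setdefault(digest, normalize(comment))
    return [reps[d] for d, n in counts.items() if n >= threshold]
```

For example, three cosmetically different copies of the same sentence would collapse to one flagged entry, while a genuinely distinct comment would pass through untouched. Flagged comments shouldn’t be deleted automatically; the point is to surface suspicious volume for a human to investigate.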

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.