Three years ago, we warned of a string of dangerous new policy proposals on the horizon. Under these proposals, platforms would be forced to implement copyright bots that sniff all of the media users upload to them and delete suspected matches with no human review.

It’s happening.

The European Parliament is weeks away from a vote on Article 13, which would force most platforms, services, and communities that host user uploads to install filters to block uploads that seem to match materials in a database of copyrighted works. If the filter detects enough similarity between an upload and an entry in that database, the upload gets blocked. Hollywood lobbyists have proposed similar measures here in the U.S.


There’s a lot to say about the dangers of Article 13—how it would censor the whole world’s Internet, not just Europe’s; how it would give an unfair advantage to big American tech companies; how it would harm the artists it was supposedly intended to help—but there’s another danger in Article 13 and other proposals to mandate filtering the Internet: they undermine our fair use rights. When platforms over-rely on automated filters to enforce copyright, users must cater their uploads to those filters.

If you’ve ever seen the message, “This video has been removed due to a complaint from the copyright owner,” you’re familiar with YouTube’s Content ID system. Built in 2007, Content ID lets rightsholders submit large databases of video and audio fingerprints and have YouTube continually scan new uploads for potential matches to those fingerprints. Despite its flaws and frequent false positives, Content ID has become the template for copyright bots on other online platforms.
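The core of a system like Content ID is threshold-based matching: reduce each work to a compact fingerprint, then compare every upload against a reference database. The sketch below is a deliberately naive illustration of that idea, not YouTube's actual algorithm; real systems use sophisticated perceptual fingerprints, and the function names and the 0.3 threshold here are purely illustrative assumptions. What the sketch does show accurately is the structural problem: nothing in the matching step can see parody, criticism, or commentary.

```python
def fingerprint(media_frames):
    """Reduce a sequence of frames to a set of hashes (a crude 'fingerprint')."""
    return {hash(frame) for frame in media_frames}

def similarity(upload_fp, reference_fp):
    """Jaccard similarity between two fingerprint sets (0.0 to 1.0)."""
    if not upload_fp or not reference_fp:
        return 0.0
    return len(upload_fp & reference_fp) / len(upload_fp | reference_fp)

def scan_upload(upload_frames, reference_db, threshold=0.3):
    """Flag the upload against every reference work above the threshold.

    Note what is missing: no notion of fair use, clip length, or context.
    Any sufficiently similar upload is flagged, commentary or not.
    """
    upload_fp = fingerprint(upload_frames)
    return [title for title, ref_fp in reference_db.items()
            if similarity(upload_fp, ref_fp) >= threshold]

# Example: a critique video that quotes part of a copyrighted work
reference_db = {"Blockbuster Movie": fingerprint(range(100))}
critique = list(range(50, 100)) + ["my commentary"] * 50
print(scan_upload(critique, reference_db))  # flagged despite the commentary
```

A filter built this way flags the hypothetical critique video above exactly as readily as a verbatim re-upload, which is why automated matching alone cannot substitute for human fair use analysis.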

It’s also served as a persistent thorn in the side of YouTube creators—particularly those who make fair use of copyrighted works in their videos. As one creator who makes pop culture criticism videos noted, “I’ve been doing this professionally for over eight years, and I have never had a day where I felt safe posting one of my videos even though the law states I should be safe posting one of my videos.”

It’s easy to see the impact that Content ID has had on the YouTube community—a simple search reveals hundreds of videos about how to avoid a Content ID takedown, with litanies of guidelines about keeping clips to a certain length, adding a colored border to them, or keeping the copyrighted content in a certain corner of the screen.

That’s the problem. The beauty of fair use is its inherent flexibility. The law does not set specific rules about how long a clip can be before you may use it in your parody or criticism, or whether it can take up the full screen. But in a filtered Internet, the algorithms create new restrictions on our online speech. The danger of mandatory filtering is that machines will replace human judgment.

While European “fair dealing” law doesn’t have the same flexibilities as U.S. fair use, it does allow more than a dozen exceptions and limitations to copyright, including protected uses like caricature, parody, criticism, and incidental inclusion of a copyrighted work—uses that a robot simply can’t be trained to reliably recognize.

Implemented thoughtfully, copyright bots can serve as a useful aid to human review, flagging uploads that demand a serious fair use analysis. But the current proposals put forth by big media companies take humans out of the equation. In doing so, they effectively take free speech out of the equation.

This week is Fair Use Week, an annual celebration of the important doctrines of fair use and fair dealing.