The internet as we know it today is under serious threat on more than one front. Okay, that sounds a bit hyperbolic, but let’s consider what policymakers have been up to lately. This week, the United States’ rules protecting net neutrality were officially rolled back thanks to the FCC’s decision to completely reverse course from the Open Internet Order put into place in 2015. Deservedly, much attention has been paid to the loss of net neutrality regulations, as critics have warned that non-commercial voices may be lost, while those who can afford to pay higher fees will have their messages prioritized. While this battle plays out, it’s also worth considering other threats to the way we communicate online, such as an EU copyright proposal that threatens user-generated content, including remixes and mashups that may take a variety of forms in videos, songs, and memes.

Today, users can upload content to blogs, video-hosting websites, personal websites, and more. Although there is always the potential for copyright infringement, more often than not these uses are completely legitimate under existing limitations and exceptions to copyright. In order for the internet to function as a democratic platform on which user-generated content has flourished, platforms must be able to trust that users are generally posting lawful, non-infringing content. If platforms or service providers were forced to police the internet and ensure that infringing content does not make its way online, the flow of new content being posted to the web could come to a grinding halt.

And yet, that’s exactly what a new EU proposal would do. Platforms that host user-uploaded material would become responsible for ensuring that posted content does not infringe copyright. By requiring sites to police material themselves, rather than relying on rightholders to enforce their own rights, the proposal’s liability rules could make platforms overly zealous or cautious about what they allow online. Essentially, it could do away with existing safe harbor protections and place heavy burdens on platforms that host user-generated content.

With an obligation to filter out content that may be infringing and a huge volume of content being created and shared on a daily basis, many services will no doubt turn to automation to police their sites. While some services already have content ID software in place, smaller or newer platforms may not have the capacity to build it. Additionally, there are a number of problems with these automated systems. Just because copyrighted content might be included in something that’s uploaded doesn’t mean the upload is prohibited: not all copying is copyright infringement.

Content ID — whether being used by rightholders to identify infringing works that are already on the internet or hosting providers trying to prevent such works from being posted in the first place — can wrongly flag material that is non-infringing. False identifications that result either in the blocking or takedown of content harm freedom of speech and the sharing of new cultural works.

For example, simple mistakes can be made based on the use of the same or similar titles or names of the artist/author. In 2013, Fox sent a takedown notice against a book by Cory Doctorow because the book and one of Fox’s hit television shows shared the same name: Homeland.
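To see why this kind of mistake is so easy to make, consider a toy sketch (not any real platform’s system) of a filter that flags uploads purely because their title matches an entry in a rightsholder’s catalog. The catalog contents and function names here are hypothetical, chosen only to mirror the Homeland example:

```python
# Hypothetical rightsholder catalog of protected titles.
# A TV show called "Homeland" appears in it.
catalog = {"homeland", "family guy"}

def naive_flag(upload_title: str) -> bool:
    """Flag an upload as 'infringing' based solely on a title match,
    with no regard for whether it is actually the same work."""
    return upload_title.strip().lower() in catalog

# Cory Doctorow's novel shares its title with the TV show,
# so a title-only filter wrongly flags it:
print(naive_flag("Homeland"))   # True: a false positive
print(naive_flag("Walkaway"))   # False: an unrelated title passes
```

The point of the sketch is that matching on surface features like titles carries no information about authorship, licensing, or fair use, which is exactly why real-world filters misfire in cases like this.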

Fair use also isn’t taken into account when software identifies copyrighted content. In the high-profile “Dancing Baby” case, a mother posted a video of her children dancing to a clip of a song by Prince and was hit with a takedown notice. The Ninth Circuit ruled that copyright holders must consider fair use before issuing a takedown notice, but this isn’t possible with automated reviews. Computer programs simply cannot accurately assess whether a use is fair, whether it is parody or criticism, a remix, or a de minimis use of a copyrighted work, or whether it is indeed infringing.

Remember the Super Bowl ad Chrysler released this year, using the words of Martin Luther King, Jr. to sell a Dodge Ram truck? One viewer cleverly replaced the original audio of the ad, which highlighted the importance of service, with audio from another portion of the same MLK speech that instead criticized consumer culture, including a line calling out Chrysler by name. This version of the ad was flagged by YouTube’s content ID system and taken down, but was later restored because it is obviously a fair use. The very fact that it was removed at all, though, demonstrates the inability of automated systems to determine whether a use is criticism, parody, or some other non-infringing use.

These are just a couple of examples of instances where automated content identification systems would capture non-infringing materials. The Electronic Frontier Foundation (EFF) has a “Takedown Hall of Shame” that provides other high-profile instances.

While the EU’s proposed copyright directive wouldn’t necessarily prohibit what are otherwise legitimate uses of copyrighted works, it could nonetheless have detrimental impacts by forcing platforms to use automated filters or risk liability. In particular, user-generated content including remixed videos and songs, blog posts, mashup art, criticism, citizen journalism, and more would all come under threat.

As noted in a statement signed by Internet luminaries including Tim Berners-Lee, Jimmy Wales, Tim Wu, and others, if the burdensome requirements of the EU’s proposal had existed during the early years of the internet, “it is unlikely that it would exist today as we know it. The impact of Article 13 would also fall heavily on ordinary users of Internet platforms — not only those who upload music or video (frequently in reliance upon copyright limitations and exceptions, that Article 13 ignores), but even those who contribute photos, text, or computer code to open collaboration platforms such as Wikipedia and GitHub.”

While ensuring that rightholders can protect their copyrighted works is important to a functioning intellectual property system, the EU’s proposal is a bit of overkill that does not properly balance the interests of rightholders with those of users. The Legal Affairs (JURI) Committee of the European Parliament is scheduled to vote on this proposal next week, the outcome of which could have serious impacts on how we use the Internet today. Several changes to the proposal have been made to blunt these threats — including carveouts for particular platforms like Wikipedia — but still do not fully address the flaws in the directive and could ultimately make the law much more complicated. Requiring platforms to police content is an overly broad approach that would cause greater harms. Hopefully, members of the committee will realize that the proposed copyright reform uses a sledgehammer where a fly swatter would do.

Krista L. Cox is a policy attorney who has spent her career working for non-profit organizations and associations. She has expertise in copyright, patent, and intellectual property enforcement law, as well as international trade. She currently works for a non-profit member association advocating for balanced copyright. You can reach her at kristay@gmail.com.