History is riddled with examples of attempts to achieve one outcome producing the opposite result. In May, the European Court of Justice (ECJ) ruled that Europeans have the “right to be forgotten”: the ability to request that search engines remove links from queries associated with their names if those results are irrelevant, inappropriate, or outdated. Just as Prohibition famously increased alcohol consumption, the “right to be forgotten,” though intended to increase online privacy, may actually have the opposite effect, both by cataloging shameful information in one place and by incentivizing others to publicize the very material people want forgotten.

Since the decision, Google has scrambled to meet Europe’s demands, creating an online form to process removal requests and hiring new personnel to handle compliance. Individuals who want information about themselves removed must submit verification of their identity, provide the URLs to be removed, and justify why those links should be taken down. Google then verifies that the submitted information is accurate and meets the criteria for removal. If the company decides to take a link down, it notifies the website where the content was posted. Google keeps records of the removed links and the requesters’ identities in order to maintain the new search results and respond to appeals. In other words, Google is building a database that holds all of the information meant to be forgotten, along with the identities of the people it concerns.

This brings to mind the “Database of Ruin,” a term coined by Paul Ohm, a professor at the University of Colorado Law School. Ohm describes a future in which a massive global database collects so much information that it eventually learns at least one deeply personal secret about everyone in the world, causing society great harm. While a “Database of Ruin” sounds like something a Doctor Who villain would build, Europe may actually be bringing such an absurdity to life. Most of us have something online that we would not like everyone else to know about, but actually finding that information is like looking for a needle in a field of haystacks. With the “right to be forgotten,” Europe is now placing all of its needles in a single haystack, effectively collecting its dirty laundry in one place. This is an unnerving prospect given that hundreds of US companies suffered data breaches in 2013. Let’s hope this database is never hacked!

Furthermore, the “right to be forgotten” has brought more notoriety to removed links than if the search results had simply been left alone. This outcome is often called the Streisand Effect, named after singer Barbra Streisand who, out of concern for her privacy, sued a photographer who had taken a picture of her home in California. Prior to the lawsuit, the image had been downloaded six times from the photographer’s website; the lawsuit’s resulting publicity drove more than 420,000 visits in the following month. In the same way, when websites are notified of Google’s takedowns, they often draw even more attention to the removed content. Recently The Telegraph reported the removal of one of its articles from Google’s search results, which set off a cycle of further removals and further reporting on that article. The loser in this episode was not The Telegraph or Google, but the person who wanted the information to be private in the first place.

The European Union has voiced its displeasure over this effect, blaming Google’s process. Yet if the European Union took control and began gathering this information in its own database, privacy groups would be up in arms about government overreach. And if Google stopped notifying websites of takedowns, publishers would have no way to argue that removed links were actually in the public interest, as The Guardian did in July when it successfully appealed and reversed Google’s initial decision. This kind of feedback is essential for transparency and accountability in the system.

It is important to note that this information is not removed from the Internet, only from search engine results, and in many cases this may modestly increase individuals’ privacy by making the information less accessible. Google’s implementation of the law is not the problem; gathering this data and notifying each website of a takedown is the only way to ensure accountability and accuracy in the process. The problem is the “right to be forgotten” itself, whose very implementation, by any organization, public or private, often defeats its purpose, creating more privacy harms for individuals who have this “right” than for those who do not.

Daniel Castro contributed to this blog. Photo credit: Alexander Key