Google’s recent decision to delist “revenge porn” from its search results is a big deal, and not just for victims. Beyond opposing harmful conduct that disproportionately targets women, Google has essentially demonstrated how something akin to the European Union’s right to be forgotten can, and should, work in the US.

Some Americans have panicked over Europe’s woefully misnamed right to be forgotten, anxious at the thought of the “biggest threat to free speech online” “erasing history” and “breaking the internet”. But such a right doesn’t have to exact an exorbitant price, and it can come about in many different ways.

In the case of the right to be forgotten, Google has shown that the world won’t be knocked off its axis if the company goes beyond protecting financially relevant information (copyrighted works, social security numbers, and bank information), and takes aggressive steps to remove links to socially relevant information that can harm autonomy, reputation, and emotional well-being.

As the debate over online privacy in the US advances, there are three important things to keep in mind.

1) Both government and industry must invest in data protection rights

The First Amendment restricts the ways the US government can oblige companies to provide citizens with data protection rights. The corporate free speech interests of Google and others will likely ensure that the US cannot mimic the data correction and erasure rights found in the EU. What this means is that the public must demand that Google – along with other online intermediaries – exert leadership and avoid adopting a mentality concerned more with legal compliance than social good.

Even cynics must admit that Google made a voluntary choice to take action against revenge porn at a global level. This is the case even if the policy deflects some attention away from the tougher questions raised by delisting requests from EU citizens, and even if it follows moves by Reddit, Twitter, and other prominent companies in responding to online harassment. Evidence now suggests that if the public keeps applying pressure, industry and government can be partners in ensuring that it is hard for undesirable parties to obtain some of the information we have privacy interests in.

Seen from this perspective, good can come from the fact that Google is not legally required to make free speech exceptions for links to information some would deem newsworthy. Even countries with broader privacy rights like those in the EU could benefit from an invested industry.

2) Personal data rights will evolve through information-specific categories

The entire regulatory framework for protecting privacy in the US is fragmented and largely focused on safeguarding specific kinds of sensitive information, like health, financial, and education records. The right to be forgotten in the US will not be any different.

To be sure, a category-based approach to privacy has its drawbacks. It fuels an incremental orientation towards privacy protection, and every victory risks breeding complacency about pursuing more far-reaching and robust reforms.

But a category-based approach also has numerous advantages. Different solutions, ranging from expunging a minor’s criminal record to deleting stale credit history and delisting non-consensual pornography, can be worked out independently of one another. Clear justifications can be given for why special types of information are so problematic they should be obfuscated or removed from different systems. At the end of the day, categories of information that clearly do not belong in the wrong hands, such as non-consensual pornography, can be handled with comparative ease. Meanwhile, grey areas, like what to do about privacy-invasive public records, can be addressed later on as the discourse surrounding them matures.

Google’s calculus for delisting seems to target information that is both harmful and without legitimate value to the public. This is a useful criterion, and the cases it covers can be expanded. As new technologies evolve and we become clearer about which categories of information should not be ripe for the picking, company policy or law might extend protection to additional categories, such as biometric data and mugshots used to exploit people.

3) The right to be forgotten in the US will be, in effect, the right to obscurity

The entire history of “forgetting” in the US is really about making things hard to find. A minor’s criminal records and a person’s outdated credit activity aren’t obliterated; indeed, many people know about them. In effect, then, what Google’s new policy does is give people a limited right to obscurity.

Obscurity protections are not perfect and, as Google acknowledges, revenge porn will remain available on websites. But delisting it from search engines can make information harder to access and interpret. Such a burden can effectively deter many people from doing harmful things. According to University of Miami law professor Mary Anne Franks, given Google’s global dominance, delisting revenge porn “will mean fewer victims of revenge porn in the future”.

The ideals driving the poorly named right to be forgotten are about striking the right balance between information availability and obscurity. Although Google claims that its stance on revenge porn is but a “narrow and limited policy,” the company is actually paving the way towards something far more substantial in the US.

Woodrow Hartzog is an associate professor at Samford University’s Cumberland School of Law. Follow him on Twitter @hartzog.

Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology. Follow him on Twitter @EvanSelinger.
