Google has published more details about how it’s handling so-called right to be forgotten requests from private individuals using its search engine in Europe — following the European Court of Justice ruling that required it to do so, back in May.

After meeting with European data protection regulators last week — to discuss the implementation of the ruling, which requires search engines to respond to requests from private individuals to de-index outdated or irrelevant links that are returned in a search for their name — Google said it had decided to make public its responses to regulators “in the interests of transparency”.

Prior to publishing the document, details were thin on the ground about how Google was handling and judging requests, beyond its telling journalists that it had a team of paralegals (i.e. non-lawyers) “trained” to deal with them.

Google has also made a lot of noise about an advisory council it has established, comprising Google staff and outside experts, which will hold public meetings over the coming months to discuss ‘personal privacy vs public interest’ (which is how Google has been couching the debate). The council is currently calling for evidence submissions from the public to inform that debate.

Now we have a detailed document (which runs to 13 pages) to parse in which Google answers questions such as: “What criteria do you use to balance your economic interest and/or the interest of the general public in having access to that information versus the right of the data subject to have search results delisted?”.

Responding to that question, Google of course claims its own economic interest does not come into play when making these rtbf judgements, beyond an “abstract consideration” of a search engine needing to help people find the most relevant information for their query. Given that Google has over the years been accused of skewing organic search results with links to its own services (getting into trouble with rivals and regulators in the process), that claim rather undermines itself.

With this new 13-page artifact in the rtbf saga, we’ve gone from one extreme — Google providing hardly any information about its processes for judging individual requests — to the other: Google detailing at length some of the technical processes and judgmental conundrums it claims are involved in making these decisions.

For example, a question about “particular problems” Google has faced in implementing the ruling yields a page-long response from Mountain View, with Google waxing lyrical on the difficulties of balancing competing interests when making these judgements.

Google also goes into lengthy detail to justify its decision to inform publishers when it has removed links to content on their sites: a decision that has prompted media outlets to write new articles about delisted content, so the rtbf ruling ends up producing the opposite of its intended effect (fresh publicity, not fair obscurity).

Google argues that its decision to inform publishers when it has de-indexed a link to their content is important on transparency grounds, and to avoid complaints. It also claims that the rtbf system could be abused by publishers to try to de-index rivals’ content.

Elsewhere its responses are not so long or detailed. Asked, for instance, how long it takes on average to process a request, Google says only that “it is too early to know what our average time will be, since we are working through a large backlog of requests”.

Obviously this document is not agenda-less. The questions came from the regulators but the answers were crafted by Google, and — given its decision to make the document public — should be viewed as its latest strategic step in seeking to undermine a ruling that, whatever it claims, does have negative implications for a data-harvesting, data-mining business model.

And while no one is saying this stuff is easy — there are undoubtedly complexities in making judgement calls about when information relating to an individual is outdated/irrelevant or whether an individual has a significant public role or not — Google continues to work to dress complexity in the clothes of impossibility.

It’s also worth noting that we’re still relatively in the dark about the human side of this story, which is really the heart of the matter: we still know very little from Google (we’ve had a glimpse from others) about the sorts of things private individuals actually want de-indexed.

In this document Google provides the same per country request stats and percentages granted figures that it revealed last week, but isn’t saying anything new about the types of requests it’s getting and the relative proportions of those different types.

The focus of the document is purely on process, partly because that’s what the Article 29 Working Party is asking Google about here (focused as it is on implementation, not the perceived rights and wrongs of the ruling). But also because that’s where Google wants the focus of this issue to remain: on process, not people.

That’s not surprising. Talk too much about individuals and the risk is you end up humanizing the debate.

[Image by Horia Varlan via Flickr]