Flashback

Much hilarity has greeted Eric Schmidt’s deeply sincere “outrage” at his “discovery” that the NSA was spying on Google. Vanity Fair, for example, pointed Mr Schmidt to some helpful Google searches.

But the NSA is merely treading in some well-worn footsteps – some of which were made by Google itself. Let us refresh your memory of one of the most prescient and chilling predictions of the past decade. For all this was forecast here at The Register in early 2004 – nine years ago.

In early 2004, Google launched Gmail. Gmail performed an automated interception of your email, and – having scanned the contents and guessed at its meaning – ran contextual advertising alongside it.

Former security advisor Mark Rasch, an attorney who had worked in the Department of Justice’s cyberfraud department during the Clinton administration and was then writing for Security Focus, raised a very interesting problem. If Google could search through and read your email without explicit legal authorisation, then surely the security agencies could do the same.

Rasch argued that when it unveiled its new contextual ads service, Google had redefined the words “read” (“learn the meaning”) and “search” – the very words that protect citizens. It had removed explicit human agency from the picture: an automated search wasn’t really a search, and its computers weren’t really “reading”.

“This is a dangerous legal precedent which both law enforcement and intelligence agencies will undoubtedly seize upon and extend, to the detriment of our privacy,” forecast Rasch, here, in June 2004.

“Google will likely argue that its computers are not ‘people’ and therefore the company does not ‘learn the meaning’ of the communication. That's where we need to be careful. We should nip this nonsensical argument in the bud before it's taken too far, and the federal government follows.”

Remarkably, Rasch even suggested where the security services might most effectively put this into practice.

“Imagine if the government were to put an Echelon-style content filter on routers and ISPs, where it examines billions of communications and 'flags' only a small fraction (based upon, say, indicia of terrorist activity). Even if the filters are perfect and point the finger only at completely guilty people, this activity still invades the privacy rights of the billions of innocent individuals whose communications pass the filter,” he wrote. “Simply put, if a computer programmed by people learns the contents of a communication, and takes action based on what it learns, it invades privacy.”

Well, fancy that.

Rasch returned to the subject several times over the years – for example here, where he discussed the implications of cloud computing.

But very few people wanted to know. Examining the ethics of internet giants is apparently vulgar. Free email, free cloud services, and bringing freedom to oppressed regimes – who wants to look a gift horse in the mouth? Through a network of think-tanks and “internet freedom” groups – it’s a substantial donor to Public Knowledge, the Electronic Frontier Foundation and many others – Google even maintained the illusion that it was on your side.

Yet having pioneered, in the public imagination, an ethical loophole that government agencies duly jumped through, Google spent the following decade lobbying furiously to weaken citizens' property rights. It’s extraordinary what a small amount of money can buy.

Pundits and punters and politicians love to hear how Google is creating clever machines – but they seem loath to accept that there's a Wizard behind the curtain, or that said Wizard may have a ruthless focus on its own self-interest.

“It's just bad public policy ... and perhaps illegal,” fretted Schmidt to the WSJ. “There clearly are cases where evil people exist, but you don't have to violate the privacy of every single citizen of America to find them.”

It’s too late, Eric. Google not only made that bed, it set up the bed store. ®