Just over a year ago, after Edward Snowden's revelations first hit the headlines, I participated in a debate at the Frontline Club with Sir Malcolm Rifkind, the former foreign secretary who is now MP for Kensington and Chelsea and chairman of the intelligence and security committee. Rifkind is a Scottish lawyer straight out of central casting: urbane, witty, courteous and very smart. He's good on his feet and a master of repartee. He's the kind of guy you would be happy to have to dinner. His only drawback is that everything he knows about information technology could be written on the back of a postage stamp in 96-point Helvetica bold.

In the discussion, he stuck closely to the line that, whatever the Americans were doing, he was confident that everything GCHQ was doing on this side of the pond was lawful. In that sense, he was a masterly exponent of the establishment view that everything was in order, that proper and effective oversight arrangements were in place and that the access of the intelligence and security committee to what it needed to know about the activities of the security services had been much improved.

So far, so predictable. But in conversation after the debate, Rifkind said something that really brought me up short. I'm paraphrasing from my notes, but the gist of the exchange was this: even if it were true that the spooks were hoovering up all the data that Snowden claimed, that in itself wasn't grounds for concern, because nobody was actually looking at it. It was only when a specific item had been identified as being of interest that an official – a human being – looked at it. Only then was that data deemed to have been legally "collected".

It was at this point that I realised why our former foreign secretary – and by implication most of his parliamentary colleagues – is having such difficulties dealing with these issues: he and they come to digital realities with analogue mindsets. It was clear that Rifkind knew little, if anything, about the capabilities of machine intelligence, data mining, network analysis and all the other stuff that computers do with big data. So, for him, it made perfect sense to regulate only the actions of intelligence officers in relation to surveillance data. And it's why William Hague thinks that the fact that we're "only" collecting metadata and not "content" means that there are no grounds for concern about bulk surveillance.
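To see why the metadata-versus-content distinction is so hollow, consider how little machinery is needed to extract sensitive inferences from metadata alone. The sketch below is purely illustrative – the names and call records are invented – but it shows that a few lines of counting, with no human reading anything, already reveal who talks to whom, how often, and for how long:

```python
from collections import Counter

# Hypothetical call-metadata records: (caller, callee, duration in seconds).
# No "content" appears anywhere in this data.
calls = [
    ("alice", "clinic", 300),
    ("alice", "clinic", 180),
    ("alice", "lawyer", 600),
    ("bob", "alice", 120),
    ("bob", "pizzeria", 60),
]

# How often does each pair of people speak?
contact_counts = Counter((caller, callee) for caller, callee, _ in calls)

# Total talk time per pair: repeated long calls to a clinic or a lawyer
# are informative without a single word of conversation being read.
talk_time = Counter()
for caller, callee, duration in calls:
    talk_time[(caller, callee)] += duration

print(contact_counts.most_common(1))    # [(('alice', 'clinic'), 2)]
print(talk_time[("alice", "lawyer")])   # 600
```

Scale that trivial exercise up to a national phone network and add standard network-analysis techniques, and "only metadata" starts to look like rather a lot.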

But in fact this misconception – that algorithms are not "agents" in the way that humans are – runs through the surveillance debate like the legend in a stick of Blackpool rock. The argument that although Google's algorithms "read" your emails in order to decide what ads to place alongside them, they're not really reading them, is part of the same genre. So is the contention that the decision to refuse you a loan is not anything personal, just an impersonal decision made by an algorithm. And so is the claim that just because your clickstream – the log of all the websites you've visited – is collected by the NSA, it doesn't mean that the spooks are spying on you.

This is legalistic cant and we could change it at a stroke by updating our legal conceptions of agency. In fact, as the world becomes increasingly controlled by algorithms, we will have little choice but to do so if we want to retain any vestige of social control over this technology. And the funny thing is, there is a good precedent for making the change.

It goes back, oddly enough, to a case, Dartmouth College v Woodward, which was decided by the US supreme court in 1819. In effect, the judgment meant that a corporate body could be a legal "person". This decision, wrote one commentator, Samir Chopra, "facilitated the stabilisation and the expansion of the early American economy; it allowed corporations to sue and to be sued, provided a unitary entity for taxation and regulation and made possible intricate transactions that would have involved a multitude of shareholders. Only by assigning legal personhood to corporations could judges make reasonable decisions about contracts."

At the moment, internet corporations and security agencies are able to hide behind the fact that algorithms – like corporate bodies prior to Dartmouth College v Woodward – are not legal entities. The lesson, says Chopra, is clear: "If you want to violate internet users' privacy, get programs – artificial, not human, agents – to do your dirty work, and then wash your hands of them by using the Google defence: there is no invasion of privacy because no humans accessed your personal details."

Our legislators could do something about this, by enacting laws that drag notions of software's agency and responsibility from the analogue era and into the digital one. But we will probably have to wait for Rifkind's successors to appreciate the need for the change. So don't hold your breath.