I found last week’s BBC flagship ‘Moral Maze’ podcast frustrating. The programme sounded promising at first: cross-examining how surveillance and human freedom can be squared in a democracy. “Witnesses” (experts on AI and data privacy) came in and were grilled by a fairly tough panel firing off rapid sceptical questions. One point especially stuck with me: the notion that individuals are partly responsible for the data mess our societies are now in (as in “it’s people’s own fault if they are so willing to give up any pretence of privacy online”).

The line of inquiry defending pervasive surveillance in the name of “individual freedom” and individual consent is tired and should be retired.

Data isn’t the new oil; it’s the new CO2. It’s a common trope in the data/tech field to say that “data is the new oil”. The basic idea is that data is a new resource being extracted, that it is valuable, and that it is a raw product fuelling other industries. But the metaphor also implies that data is inherently valuable in and of itself, and that “my data” is a resource I really should tap into.

In reality, we are more affected by data about other people (with whom we are grouped) than by data about ourselves. As I have written in the MIT Technology Review, “even if you deny consent to ‘your’ data being used, an organisation can use data about other people to make statistical extrapolations that affect you.” We are bound by other people’s consent; our own consent (or lack thereof) is becoming increasingly irrelevant. We won’t solve the societal problems pervasive data surveillance is causing by rushing through online consent forms. If you see data as CO2, it becomes clearer that its impacts are societal, not solely individual. My neighbour’s car emissions, or the emissions from a factory on a different continent, affect me more than my own emissions or lack thereof. This isn’t to deny individual responsibility or individual harm; it adds a lens that we too often miss entirely.

We should stop endlessly rehearsing the argument that “people willingly choose to give up their freedom in exchange for free stuff online”. The argument is flawed for two reasons. First, the reason usually given: people have no choice but to consent in order to access the service, so consent is manufactured. We are not exercising choice in providing data; we are resigned to the fact that we have no choice in the matter.

The second argument, less well known but just as powerful, is that we are not only bound by other people’s data; we are bound by other people’s consent. In an era of machine learning-driven group profiling, this effectively renders my denial of consent meaningless. Even if I withhold consent — say I refuse to use Facebook or Twitter or Amazon — the fact that everyone around me has joined means there are just as many data points available to target and surveil me. The issue is systemic; it is not one where a lone individual can make a choice and opt out of the system. We perpetuate this myth by talking about data as our own individual “oil”, ready to sell to the highest bidder. In reality I have little control over this supposed resource, which acts more like an atmospheric pollutant, affecting me and others in myriad indirect ways. There are more relations, direct and indirect, between data related to me, data about me, and data inferred about me via others than I can possibly imagine, let alone control with the tools at our disposal today.
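The group-profiling mechanism can be sketched in a few lines of Python. This is a deliberately toy illustration, not any real platform’s method: all the data, groups, and interests below are hypothetical, and the “model” is just a majority vote over people who share my postcode and age bracket. The point it demonstrates is that someone who never consented can still be profiled from what similar people volunteered.

```python
from collections import Counter

# Hypothetical data volunteered by *consenting* users.
# Each record: (postcode, age_bracket) -> a declared interest.
consented = [
    (("N1", "25-34"), "fitness"),
    (("N1", "25-34"), "fitness"),
    (("N1", "25-34"), "cooking"),
    (("E2", "35-44"), "gardening"),
    (("E2", "35-44"), "gardening"),
]

def infer_interest(postcode, age_bracket):
    """Predict an interest for someone who withheld consent,
    using only data shared by similar consenting people."""
    matches = [interest for group, interest in consented
               if group == (postcode, age_bracket)]
    if not matches:
        return None
    # Majority vote among the consenting members of the group.
    return Counter(matches).most_common(1)[0][0]

# I never joined, never consented — yet my neighbours' data
# still yields a usable prediction about me.
print(infer_interest("N1", "25-34"))  # -> fitness
```

Real profiling systems use far richer features and models, but the structural point is the same: my refusal removes one row from the table, while the group I belong to keeps supplying the signal.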

Because of this, we need a social, systemic approach to deal with our data emissions: an environmental approach to data rights, as I’ve argued previously. But first, let’s all admit that the line of inquiry defending pervasive surveillance in the name of “individual freedom” and individual consent gets us no closer to understanding the threats we are facing.