Shoshana Zuboff’s “The Age of Surveillance Capitalism” is already drawing comparisons to seminal socioeconomic investigations like Rachel Carson’s “Silent Spring” and Karl Marx’s “Capital.” Zuboff’s book deserves these comparisons and more: Like the former, it’s an alarming exposé about how business interests have poisoned our world, and like the latter, it provides a framework to understand and combat that poison. But “The Age of Surveillance Capitalism,” named for the now-popular term Zuboff herself coined five years ago, is also a masterwork of horror. It’s hard to recall a book that left me as haunted as Zuboff’s, with its descriptions of the gothic algorithmic daemons that follow us at nearly every instant of every hour of every day to suck us dry of metadata. Even those who’ve made an effort to track the technology that tracks us over the last decade or so will be chilled to their core by Zuboff, unable to look at their surroundings the same way.

Cover: Public Affairs Books

An unavoidable takeaway of “The Age of Surveillance Capitalism” is, essentially, that everything is even worse than you thought. Even if you’ve followed the news items and historical trends that gird Zuboff’s analysis, her telling takes what look like privacy overreaches and data blunders and recasts them as the intentional movements of a global system designed to turn your violation into a revenue stream. “The result is that both the world and our lives are pervasively rendered as information,” Zuboff writes. “Whether you are complaining about your acne or engaging in political debate on Facebook, searching for a recipe or sensitive health information on Google, ordering laundry soap or taking photos of your nine-year-old, smiling or thinking angry thoughts, watching TV or doing wheelies in the parking lot, all of it is raw material for this burgeoning text.”

Tech’s privacy scandals, which seem to appear with increasing frequency both in private industry and in government, aren’t isolated incidents, but rather brief glimpses of an economic and social logic that overtook the planet while we were enjoying Gmail and Instagram. The clichéd refrain that if you’re “not paying for a product, you are the product”? Too weak, says Zuboff. You’re not technically the product, she explains over the course of several hundred tense pages, because you’re something even more degrading: an input for the real product, predictions about your future sold to the highest bidder so that this future can be altered. “Digital connection is now a means to others’ commercial ends,” writes Zuboff. “At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience.”

Zuboff recently took a moment to walk me through the implications of her urgent and crucial book.
This interview was condensed and edited for clarity.

I was hoping you could say something about the semantic games Facebook and other similar data brokers are playing when they say they don’t sell data.

I remember sitting at my desk in my study early in 2012, listening to a speech that [Google’s then-Executive Chair] Eric Schmidt gave somewhere. He was bragging about how privacy conscious Google is, and he said, “We don’t sell your data.” I got on the phone and started calling these various data scientists I know and saying, “How can Eric Schmidt say we don’t sell your data, in public, knowing that it’s recorded? How does he get away with that?” It’s exactly the question I was trying to answer at the beginning of all this.

Let’s say you’re browsing, or you’re on Facebook putting stuff in a post. They’re not taking your words and going into some marketplace and selling your words. Those words, or if they’ve got you walking across the park or whatever, that’s the raw material. They’re just secretly scraping your private experience as raw material, and they’re stockpiling that raw material, constantly flowing through the pipes. They sell prediction products into a new marketplace. What are those guys really buying? They’re buying predictions of what you’re gonna do. There are a lot of businesses that want to know what you’re going to do, and they’re willing to pay for those predictions.

That’s how they get away with saying, “We’re not selling your personal information.” That’s how they also get away with saying, as in the case of [recently implemented European privacy law] GDPR, “Yeah, you can have access to your data.” Because the data they’re going to give you access to is the data you already gave them. They’re not giving you access to everything that happens when the raw material goes into the sausage machine, to the prediction products.

Do you see that as substantively different from selling the raw material?
Why would they sell the raw material? Without the raw material, they’ve got nothing. They don’t want to sell raw material; they want to collect all of the raw material on earth and have it as proprietary. They sell the value added on the raw material.

It seems like what they’re actually selling is way more problematic and way more valuable.

That’s the whole point. Now we have markets of business customers that are selling and buying predictions of human futures. I believe in the values of human freedom and human autonomy as the necessary elements of a democratic society. As the competition among these prediction products heats up, it’s clear that surveillance capitalists have discovered that the most predictive sources of data come from intervening in our lives, in our real-time actions, to shape our action in a direction that aligns with the outcomes they want to guarantee to their customers. That’s where they’re making their money. These are bald-faced interventions in the exercise of human autonomy, what I call the “right to the future tense.” The very idea that I can decide what I want my future to be and design the actions that get me from here to there, that’s the very material essence of the idea of free will.

“These are bald-faced interventions in the exercise of human autonomy.”

I write about the Senate committee back in the ’70s that reviewed behavioral modification from the point of view of federal funding, and found behavioral mod a reprehensible threat to the values of human autonomy and democracy. And here we are, all these years later, like, la-di-da, please pass the salt. This thing is growing all around us, this new means of behavioral modification, under the auspices of private capital, without constitutional protections, done in secret, specifically designed to keep us ignorant of its operations.

When you put it like that, it sure makes the question of whether Facebook is selling our phone number and email address seem kind of quaint.

Indeed. And that’s exactly the kind of misdirection that they rely on.

This made me reflect, not totally kindly, on the years I spent working at Gizmodo covering consumer tech. No matter how skeptical I tried to remain then, I look back on all the Google and Facebook product announcements that we covered just as “product news.”

[The press is] up against this massive juggernaut of private capital aiming to confuse, bamboozle, and misdirect. A long time ago, I think it was 2007, I was already researching this topic and I was at a conference with a bunch of Google people. Over lunch I was sitting with some Google executives and I asked the question, “How do I opt out of Google Earth?” All of a sudden, the whole room went silent. Marissa Mayer, [a Google vice president at the time], was sitting at a different table, but she turned around, looked at me, and said, “Shoshana, do you really want to get in the way of organizing and making accessible the world’s information?” It took me a few minutes to realize she was reciting the Google mission statement.

Photo: Michael D. Wilson

The other day, I was looking through the section of my Facebook account that lists the interests Facebook has ascribed to me, the things it believes I’m into. I did the same with Twitter, and I was struck in both cases by how wrong they were. I wonder if you find it reassuring that a lot of this stuff seems to be pretty clunky and inaccurate right now.

I think there’s a range here. Some of it still feels clunky and irrelevant and produces in us perhaps a sigh of relief. But then on the other end, there are things that are uncannily precise, really hitting their mark at the moment they should be. And because we only have access to what they let us see, it’s still quite difficult for us to judge precisely what the range of that [accuracy] is.

What about the risk of behavioral intervention based on false premises? I don’t want a company trying to intervene in the course of my daily life based on the mistaken belief that I’m into fly fishing any more than I want them to intervene based on a real interest I have.

This is why I’m arguing we’ve got to look at these operations and break them down. They all derive from a fundamental premise that’s illegitimate: that our private experience is free for the taking as raw material. So it’s almost secondary whether their conclusions are right or wrong about us. They’ve got no right to intervene in my behavior in the first place. They have no right to my future tense.

Is there such a thing as a good ad in 2019? Is it even possible to implement a form of online advertising that isn’t invasive and compromising of our rights?

An analogy I would draw would be negotiating how many hours a day a 7-year-old can work in a factory.

I take that as a no.

We’re supposed to be contesting the very legitimacy of child labor.