Imagine that you’ve just bought a new GPS device for your car. The first time you use it, it works as expected. However, on the second journey, it takes you to an address a few blocks away from where you had wanted to go. On the third trip, you’re shocked when you find yourself miles away from your intended destination, which is now on the opposite side of town. Frustrated, you decide to return home, but when you enter your address, the GPS gives you a route that would have you driving for hours and ending up in a totally different city.

Like any reasonable person, you would consider this GPS faulty and return it to the store – if not throw it out of your car window. Who would continue to put up with a GPS that they knew would take them somewhere other than where they wanted to go? What reason could anyone possibly have for continuing to tolerate such a thing?

No one would put up with this sort of misdirection from a technology that directs them through physical space. Yet we do precisely this, on a daily basis, when it comes to the technologies that direct us through informational space. We have a curiously high tolerance for poor navigability when it comes to our attentional GPSs – those technologies that direct our thoughts, our actions and our lives.

Think for a moment about the goals you have set for yourself: your goals for reading this essay, for later today, for this week, even later this year and beyond. If you’re like most people, they’re probably goals such as “learn how to play piano”, “spend more time with family”, “plan that trip I’ve been meaning to take” and so on.

These are real goals, human goals. They’re the kinds of goals that, when we’re on our deathbeds, we’ll regret not having accomplished. If technology is for anything, it’s for helping us pursue these kinds of goals.

Machines serving our needs?

A few years ago, I read an article called Regrets of the Dying. It was about a businesswoman named Bronnie Ware whose disillusionment with the daily slog of her trade had led her to leave it and to start working in a different place: in rooms where people were facing their final hours. She spent her days attending to their needs and listening to their regrets, and she recorded the most common things they wished they'd done, or hadn't done, in life: they'd worked too hard, they hadn't told people how they felt, they hadn't let themselves be happy and so on. This, it seems to me, is the proper perspective – the one that's truly our own. It's the perspective that our screens and machines ought to help us return to again and again: because whatever we might choose to want, nobody chooses to want to regret.

Think back on your goals from a moment ago. Now try to imagine what your technologies’ goals are for you. What do you think they are? I don’t mean the companies’ mission statements and high-flying marketing messages – I mean the goals on the dashboards in their product design meetings, the metrics they’re using to direct your attention, to define what success means for your life. How likely is it that they reflect the goals you have for yourself?

Not very likely, I’m sorry to say. From their perspective, success is almost always defined in the form of low-level “engagement” goals, as they’re often called. These include things like maximising the amount of time you spend with their product, keeping you clicking or tapping or scrolling as much as possible, or showing you as many pages or ads as they can. But these “engagement” goals are petty, subhuman goals. No person has these goals for themselves. No one wakes up in the morning and asks: “How much time can I possibly spend using social media today?” (If there is someone like that, I’d love to meet them and understand their mind.)

What this means, though, is that there’s a deep misalignment between the goals we set for ourselves and the goals that many of our information technologies have for us. This seems to me to be a really big deal and one that no one talks about nearly enough. We trust these technologies to be companion systems for our lives: we trust them to help us do the things we want to do, to become the people we want to be. We trust them to be on our side.

Illustration: Dominic McKenzie

Yet these wondrous machines, for all their potential, have not been on our side. Our goals have not been their goals. Rather than supporting our intentions, they have largely sought to grab and keep our attention. In their cut-throat rivalry for the increasingly scarce prize of “persuading” us, of shaping our thoughts and actions in accordance with their predefined goals, they have been forced to resort to the cheapest, pettiest tricks in the book, appealing to the lowest parts of us, to the lesser selves that our higher natures perennially struggle to overcome. Furthermore, they now deploy in the service of this attentional capture and exploitation the most intelligent systems of computation the world has seen.

If you wanted to train all of society to be as impulsive and weak-willed as possible, how would you do it? One way would be to invent an impulsivity training device – let's call it an iTrainer – that delivers an endless supply of informational rewards on demand. You'd want to make it small enough to fit in a pocket or purse so people could carry it anywhere they went. The informational rewards it would pipe into their attentional world could be anything, from cute cat photos to tidbits of news that outrage them (because outrage can, after all, be a reward too). To boost its effectiveness, you could endow the iTrainer with rich systems of intelligence and automation so it could adapt to users' behaviours, contexts and individual quirks in order to get them to spend as much time and attention with it as possible.

So let’s say you build the iTrainer and distribute it gradually into society. At first, people’s willpower would probably be pretty strong and resistant. The iTrainer might also cause awkward social situations, at least until enough people had adopted it so that it was widely accepted. But if everyone were to keep using it over several years, you’d probably start seeing it work pretty well. Now, the iTrainer might make people’s lives harder to live: it would no doubt get in the way of the effective pursuit of their desired tasks and goals. Even though you created it, you probably wouldn’t let your kids use one. But from the point of view of your design goals, ie making the world more impulsive and weak-willed, it would probably be a roaring success.

Then what if you wanted to take things even further? What if you wanted to make everyone even more distracted, angry, cynical – and even unsure of what, or how, to think? What if you wanted to troll everyone’s minds? You’d probably create an engine, a set of economic incentives, which would make it profitable for other people to produce and deliver these rewards – and, where possible, you’d make these the only incentives for doing so. You don’t want just any rewards to get delivered – you want people to receive rewards that speak to their impulsive selves, rewards that are the best at punching the right buttons in their brains. For good measure, you could also centralise the ownership of this design as much as possible.

If you’d done all this 10 years ago, right about now you’d probably be seeing some interesting results. You’d probably see nine out of 10 people never leaving home without their iTrainer. Almost half its users would say they couldn’t even live without their device. You’d probably see them using it to access most of the information they consume, across every context of life: from politics to education to celebrity gossip and beyond. You’d probably find they were using the iTrainer hundreds of times per day, spending a third of their waking lives engaged with it and it would probably be the first and last thing they engaged with every day.

If you wanted to train society to be as weak-willed and impulsive as possible, you could do a whole lot worse than this. In any event, after unleashing the iTrainer into the world, it would be absurd to claim that it hadn’t produced significant changes in the thoughts, behaviour and habits of its users. After all, everyone would have been part of a rigorous impulsivity training programme for many years.

What’s more, this programme would have effectively done an end-run around many of our other societal systems: it would have opened a door directly on to our attentional capacities and become a primary lens through which we see the world. It would, of course, be a major undertaking to try to understand the full story of the effects this project had had on people’s lives – not only as individuals, but also for society as a whole. It would certainly have had major implications for the way we had been collectively discussing and deciding questions of great importance. And it would certainly have given us, as did previous forms of media, political candidates who were made in its image.

Of course, the iTrainer project would never come anywhere close to passing a research ethics review. Launching such a project of societal reshaping, and letting it run unchecked, would clearly be utterly outrageous. So it’s a good thing this is all just a thought experiment.

The intense competition for our attention

The Canadian media theorist Harold Innis once said that his entire career’s work proceeded from the question: “Why do we attend to the things to which we attend?” When I was working in the technology industry, I realised that I’d been woefully negligent in asking this question about my own attention. When I started doing so, I began to see with new eyes the dashboards, metrics and goals that were driving much of its design. These were the destinations we were entering into the GPSs guiding the lives of millions of human beings. I soon came to understand that the technology industry wasn’t designing products; it was designing users. These magical, general-purpose systems weren’t neutral “tools”; they were purpose-driven navigation systems guiding flesh-and-blood human lives. They were extensions of our attention.

The new challenges we face in the Age of Attention are, on both individual and collective levels, challenges of self-regulation. Having some limits is inevitable in human life. In fact, limits are necessary if we are to have any freedom at all. Like the iTrainer in my thought experiment, digital technologies have transformed our experiential world into a never-ending flow of potential informational rewards. They’ve become the playing field on which everything now competes for our attention. As Avner Offer writes of economic abundance in The Challenge of Affluence: “if these rewards arrive faster than the disciplines of prudence can form, then self-control will decline with affluence: the affluent (with everyone else) will become less prudent”.

In a sense, information abundance requires us to invert our understanding of what “information technologies” do: rather than overcoming barriers in the world, they increasingly exist to help us put barriers in place. The headphone manufacturer Bose now sells a product called Hearphones that allows the user to block out all sounds in their environment except the ones coming from their desired source – to focus on a conversation in a loud room, for example. The product’s website reads: “Focus on the voices you want to hear – and filter out the noises you don’t – so you can comfortably hear every word. From now on, how you hear is up to you.” We could also read this tagline as a fitting description of the new challenges in the Age of Attention as a whole.

The increasing rate of technological change further amplifies these challenges of attention and self-regulation. Historically, new forms of media took years, if not generations, to be adopted, analysed and adapted to. Today, however, new technologies can arrive on the scene and rapidly scale to millions of users in the course of months or even days.

The constant stream of new products this unleashes – along with the ongoing optimisation of features within products already in use – can result in a situation in which users are in a constant state of learning and adaptation to new interaction dynamics, familiar enough with their technologies to operate them, but never so fully in control that they can prevent the technologies from operating on them in unexpected or undesirable ways. This keeps us living on what I sometimes call a “treadmill of incompetence”.

There is an alternative

What do you pay when you pay attention? You pay with all the things you could have attended to, but didn’t: all the goals you didn’t pursue, all the actions you didn’t take and all the possible yous you could have been, had you attended to those other things. Attention is paid in possible futures forgone. You pay for that extra Game of Thrones episode with the heart-to-heart talk you could have had with your anxious child. You pay for that extra hour on social media with the sleep you didn’t get and the fresh feeling you didn’t have the next morning. You pay for giving in to that outrage-inducing piece of clickbait about that politician you hate with the patience and empathy it took from you, and with the anger at yourself for taking the bait in the first place.

We pay attention with the lives we might have lived. When we consider the opportunity costs in this wider view, the question of “attention” extends far beyond the next turn in your life’s GPS: it encompasses all the turns and their relations, the nature of your destination, the specific way you want to get there, why you’re going there and also your ability to ask any of these questions in the first place. In this view, the question of attention becomes the question of having the freedom to navigate your life in the way you want, across all scales of the human experience.

But I also knew this wasn’t just about me – my freedom, my attention, my deep distractions, my frustrated goals. Because when most people in society use your product, you aren’t just designing users – you’re designing society. But if all of society were to become as distracted in this new, deep way as I was starting to feel, what would that mean? What would be the implications for our shared interests, our common purposes, our collective identities, our politics?

James Williams, the author. Photograph: Nine Dots Prize

Some threats to freedom we recognise immediately; others take time to reveal themselves for what they are. For too long, we’ve minimised the personal and political threats of this intelligent, adversarial persuasion as mere “distraction” or minor annoyance. In the short term, these challenges can indeed frustrate our ability to do the things we want to do. In the longer term, however, they can make it harder for us to live the lives we want to live or, even worse, undermine fundamental capacities such as reflection and self-regulation, making it harder, in the words of philosopher Harry Frankfurt, to “want what we want to want”. Seen in this light, these new attentional adversaries threaten not only the success, but even the integrity of the human will, at both individual and collective levels.

I used to think there were no great political struggles left. The truly epic defences of freedom, I thought, had been fought and won by generations greater than my own, leaving to my time only the task of dutifully administering our hard-earned political inheritance. I was ready to live in a world where my own great struggles would take place primarily in the memories, the simulations of the old ones. I was prepared to find new ways of satisfying the need for struggle in the face of a world that no longer demanded it. I used to think the big political questions had more or less been answered.

How wrong I was. I now believe the liberation of human attention may be the defining moral and political struggle of our time. It is a first-order problem; its success is a prerequisite to the success of virtually all other struggles. We therefore have an obligation to rewire the system of intelligent, adversarial persuasion we have inherited before it rewires us.

Doing so requires hacking together new ways of talking and thinking about the problem, as well as summoning the courage necessary for advancing on it in inconvenient and unpopular ways. I know that this courage exists – the open question is whether, in this world of torrential distraction, it can still receive a proper hearing.

• James Williams is the author of Stand Out of Our Light: Freedom and Resistance in the Attention Economy (CUP)