Experience is Not an Option: A Review of James Williams’ Stand Out of Our Light: Freedom and Resistance in the Attention Economy (2018)
Shannon Foskett · Jul 14, 2018

Imagine the following scenario: one weekday morning, running late to work, you throw your bags in your six-month-old smart car, give your thumbprint to the keyless ignition, and immediately put it in reverse. But the user interface lights up with a message you haven’t seen before: something about identity fraud protection and your bank account. In your rush, you fail to see that the garage door hasn’t fully cleared (the smart sensing system would normally catch this), and you hit the door, denting it. Now it won’t open. With blood rushing through your veins and your body temperature rising, you’re forced to pay attention to the software — but the screen is frozen. You’re stuck. Your car is stuck, and you won’t make it to work. But you’ll need to pay for a new garage door — if you can still access your bank account.

However realistic this scenario may sound, it captures the sentiments many of us have surely had in analogous situations involving technologies that failed in their function as facilitators of experience and became a blunt-force obstacle to it instead. Examples abound: consider losing the device you use for two-factor authentication just when you need it to access the cloud account that would have enabled you to locate the lost or stolen device — but no matter, the cloud isn’t currently available anyway.

There are too many ways in which the technologies of connection place us in the position of feeling like spectators of a fast-moving shell game involving the things that are most important in our lives. All too often, it feels like the only option is to surrender with a sigh, and sink back into the big-comfy-chair-equivalent of accepting some version of what Yale historian Timothy Snyder has called out as the “politics of inevitability” — in this context, the notion that technological development obeys an intrinsic logic, speed and direction over which we have neither expertise nor authority nor means to intervene — if it’s the newest, it must be the smartest; the best option; a solution we didn’t know we needed; and so on. Such assumptions (quite deep, often invisible) are also key factors behind what critic and VR pioneer Jaron Lanier has discussed as technological “lock-in,” which is as bad as it sounds (please read his work, if you haven’t).

Stand Out of Our Light: Freedom and Resistance in the Attention Economy (Cambridge University Press, 2018), the first book by Oxford-trained philosopher (and former Google strategist) James Williams, seeks to intervene in the “politics of inevitability” that seems to cloud technical media design, development and adoption. It’s an extremely well-considered, brilliant and eloquent plea — the latest in a lengthening trail of internet-era technology criticism reaching back to Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains (2010), and in conversation with Tim Wu’s The Attention Merchants (2016) and The Distracted Mind (Adam Gazzaley and Larry D. Rosen, 2016). Taking specific aim at the digital technologies making increasing and coercive demands on our daily supply of attention (for starters), Stand Out of Our Light amplifies some of the humanist, anti-reductionist arguments in Jaron Lanier’s You Are Not a Gadget (2011) and provides an important follow-up to Adam Greenfield’s incisive, analytical sweep of the current landscape in Radical Technologies (2017). While the book’s focus might appear quotidian next to the proliferation of threats in the more publicized genres of privacy, automation or the dawn of superintelligence, don’t be fooled. As Williams cogently demonstrates, nothing less than the coherence of democratic will may be at stake in a global media environment optimized for increasing distraction, and aiming to extend its reach into the neurological register with brain-to-computer interfaces for more in-depth, direct social sharing.

Reading through the book, I felt a sense of reassurance — not only the welcome recognition of sharing one’s diagnosis with someone who has brought such clarity and depth to the phenomena, but also further support for the view that the consequences ensuing from the rapidity of our digital onboarding shouldn’t be underestimated or dismissed as normative moralizing. (Consider, for example, the view of philosophers Thomas Metzinger and Michael Madary, as recently reported in The New Yorker (Joshua Rothman, April 2018), that embodied virtual reality can have profound and inscrutable effects on “the very relationship we have to our own minds.” Based on controlled experimental research into the psychological effects of other-body embodiment, Metzinger advises that violence in virtual embodied environments should be prohibited — a familiar-sounding claim that others will dismiss as moral panic.) As such, this is probably an especially good book for the “we will be just fine” crowd. It’s definitely for anyone who hasn’t thought much in particular about this topic. And it’s a must for designers and developers. Philosophers of technology, and those working in technological ethics and policy, also ought to take special note.

The book’s inquiry begins with the question, “what do we pay when we pay attention?” (45), read alongside Herbert Simon’s oft-cited dictum that a wealth of information creates a poverty of attention (13). How do we pay with something we have less and less of? It becomes more costly. As such, the book provides something of a socioeconomic theory of attention, organized through three provisional frameworks: the attention of doing (daily functions); the attention of being (achieving life goals; self-directedness); and the attention of knowing (metacognition; contemplation; self-reflection). Our networked media ecology is less about an information economy, or even “infotainment,” than about the exchange, or capture, of attention. On Williams’ view, the “design goal” of today’s technology is not to inform, but to induce (to cause you to perform some act of cognitive, affective or attentional agency, for example). Here he turns to the philosopher J.S. Mill’s argument for the freedom of mind, to which Mill gives the utmost importance as the freedom on which all others, especially the freedom of speech, depend. Insofar as media design fails to honor such a foundational freedom, it undermines the basis of a free society and commits acts of epistemic injustice: using AI algorithms to optimize platforms for extreme forms of bias, longer YouTube viewing times, or the spread of “fake news.”

Perhaps the most important contribution the book makes is the step it takes beyond others in the subgenre: linking problems of attentional regulation with the corruption of the will (akrasia, or weakness of will), Williams demonstrates that epistemic distraction dehumanizes individuals insofar as the will becomes too dysfunctional for the pursuit of one’s higher goals and values (80). Since the individual will grounds the collective will and the authority of politics (87), repairing the relationship between attention and the will becomes an urgent collective and political task. Williams asks if there’s a natural limit to the crumbling of human attention, some “point of no return” after which we would be unable to fight back or regain full cognitive autonomy (107). You could think of this as the inverse complement to the singularity/superintelligence hypothesis. As our networked digital technologies have become the medium for almost all sectors of daily life, he argues, they necessarily represent the “ground zero” of political struggle. Gifted in the selection of explanatory metaphors, Williams likens the attention economy to that of human trafficking — the social media market is based upon a coercive, one-way “harvesting of attention” (88) — and concludes that individual attention needs legislative protections similar to those our organs have (here one can recall the studies showing that willpower, or the energy for self-discipline, is finite, and decreases throughout the day with each decision we make). Put differently, one can ask: “what forms of attitudinal and behavioral manipulation shall we consider to be acceptable business models?” (108). The point here is that democracies require the attentional and volitional health of their citizens, and that if that health is squandered on “petty design goals” or worse, the fabric of democracy suffers in proportion.

It is not difficult to draw parallels here with recent events involving Facebook advertising and the use of user data for political persuasion in the run-up to the 2016 election. As Williams cautions, the competition among various media technologies to capture ever larger shares of the conscious lives of their users “will not cross any threshold of intolerability that forces us to act” (93), but is rather more like the proverbial pot of slowly heating water, or — to use a metaphor from Madeleine Albright’s Fascism: A Warning — an “incremental plucking of the chicken.” Giving support to the idea that these business models based upon the capture of attention and affect are built with no intrinsic braking mechanisms, Williams cites Facebook’s plans, announced in 2017, for developing a brain-to-computer interface with which users could share thoughts and feelings directly from the “speech center” of the brain.

Given their indictments of the digital design industry for undermining capacities for reflection and self-regulation, as well as encouraging, or inducing, the pursuit of “petty, subhuman goals” (8), Williams’ arguments resonate with those of one of the few philosophers he doesn’t cite: Bernard Stiegler. Inasmuch as Stiegler charges late modern digital networks with the “industrialization of consciousness,” his views find agreement in the claim that over the long term, these technologies make it more difficult to “want what we want to want” (xi), as Williams puts it. As I understand Stiegler’s rather more complex set of arguments (responding in large part to Husserl and Heidegger, and evolving in his later work), globalized digital technics in particular erodes our ability to reach that intersection of knowing, desiring and imagining known as noēsis, and thus undermines our access to fully human forms of potential. It blocks this in part by interrupting our participation in semiosis, or the making of meaning (this is the issue of the “proletarianization of consciousness”). With the standardization of tertiary memories (chronologies in the form of news feeds, top-40 lists, year-end reviews, and so on), the consciousness of past, present and future often comes ready-made, in a form twice alienated from individual living experience (I’m taking some shortcuts here, but this is the gist of some key arguments in Technics and Time and Symbolic Misery). All this is to say that the level of Williams’ analysis is in good company. More attention needs to be paid to the construction and deployment of attention as such, beyond merely cognitive-psychological frameworks and with respect to a broader “politics of consciousness,” as Stiegler would put it.

If the originality of this manifesto lies less in the strength and diagnostic precision of its analysis, it comes instead with the force of an impressive program of recommendations and suggestions for ethical guidance, which Williams outlines in the penultimate chapter.

It is crucial, as he emphasizes, to remember that the web as we know it is only 10,000 days old.

Relative to so many of the world’s time-hardened institutions, traditions, and divisions, there ought to be plenty of systemic malleability left to make important changes that can align us more closely with the values we hold and the directions in which we want to grow. Nonetheless, Williams makes clear that what he has in mind, and what is desperately needed, is a genuine rebellion, or actual revolution, in which we fight for these freedoms of attention. To reiterate: the design goals of the various technologies of attention ought to be aligned with our own. This requires that designers themselves care about, and give attention to, the things which actually matter for individual well-being (which, to be sure, is not necessarily the same for everyone — but see Williams’ cautionary defense of a “minimum viable mind”).

Williams offers four key areas for intervention, each with several calls to action, in which changes can be made to help recalibrate the collective technological compass and keep the interests and freedoms of individuals at heart. Some of these address basic issues of policy and corporate or financial structures that place more of the burden for the protection of attention on the business models themselves. For example, this might involve giving incentives to companies and startups to incorporate social-good goals into their financial models, and disincentivizing models that pursue “the mere capture and exploitation of user attention” (115). It might involve creating policies that address an ethics of the management of attention and not just the management of information (choices about data collection, sharing and storage).

The most compelling injunctions form the philosophical backbone for an ethics of attention. Both new concepts and new language are needed to get beyond the superficial vocabulary of “distraction” and capture more of the relations between attention and the human will (a multi-faceted type of meta-introspection, one might say, that includes capacities for self-directedness, discipline, self-regulation, persistence, courage and more) (112). The various aspects of will — its fragility and the energy it requires and consumes in acting — would be acknowledged and respected in an empathetic metatechnological discourse, eschewing words which reduce humans to disembodied “eyeballs,” “users” and “funnels.” Conceptual structures — like those of ethical progress and ethically salient criteria — can be reintroduced, emphasized and created; for example, in ethical evaluations of digital technologies in terms of their “goodness of fit” to individuals’ needs and goals. Williams proposes categories such as “Seductive Technologies” (the worst offenders, which use blatant techniques of coercion and compulsion). He also advances a compelling standpoint of attentional labor, suggesting that if “we’re not getting sufficient value for our attentional labor, and the conditions of that labor are unacceptable — we could conceive of the necessary corrective as a sort of ‘labor union’ for the workers of the attention economy, which is to say, all of us” (123).

Most provocatively, perhaps, Williams puts forward the challenge of instituting a designer’s oath as a professional vehicle for standardizing ethical guidelines.

Similar to other oaths or commitments (in finance and medicine, for example), a designer’s oath would recognize the need for accountability of conduct in a field that has an outsized impact on everyone’s life. Several similarly inspired calls have been put forth recently, such as the “code of ethics” for virtual embodied reality written by Mel Slater, Thomas Metzinger and Michael Madary. Among Williams’ notable proposals for inclusion: the promise to respect the attention of a product’s users, and never to use their own weaknesses against them.

While his critique is remarkably thorough, even if at times it seems idealistic or unlikely to see implementation (the “politics of inevitability” again), there are more than enough additional pressing issues and points in need of further clarification and debate for others to jump in and add to this conversation. Consider the claim that users need to have a real, significant say in design (123). This should be taken on a very broad social level — well beyond that of the potential users of an individual application or device. In the rollout of self-driving cars, for example, or Facebook’s attempt at reading between the lines for suicidal ideation, were you consulted or informed about critical objections or options for implementation, even though you could be directly and adversely affected by these technologies? How about racial bias in facial-recognition algorithms? One of the few oversights of the book’s outlines for resistance is that it stops short of addressing diversity in design — part of Williams’ desire, perhaps, not to “place blame” (no one goes into design intending to cause social harms, as he states). But we need critical, intersectional intervention from multiple frameworks — of race, gender, sexuality and economic inequality — with respect to the distribution and dissemination of design ideas and implementation. We need to foster and promote the work of researchers such as Joy Buolamwini, founder of the Algorithmic Justice League, the human rights-focused Tactical Technology Collective, and groups like the Campaign for a Commercial-Free Childhood. Who has a voice, and who is heard, in the sway of technological media development? If codes of ethics and designer’s oaths are on the horizon, we need genuinely diverse representation and input at all levels in their development as well.

One point of specific questioning that I would raise follows from one of the proposals in the hypothetical designer’s oath. Williams suggests that we might have a moral obligation to engage in more measurement — “of the right things” — for example, variables indexing well-being, levels of wakefulness or concentration, our higher-level goals and other things we value (121–122). His reasoning is that in some cases it might cause harm to an individual if certain things weren’t measured that could have been (an example might be someone nodding off while operating machinery in the workplace, or signals that someone might be part of a vulnerable group). My response is to ask how this (algorithms text-mining users’ posts for signs of suicidal ideation, or anything else, for that matter) differs from examples of what has already been taken as an infringement of rights, such as Facebook, Android or Alexa covertly detecting sounds and emotions through a device’s microphone or camera. Measurement — the quantification of so many human values and relations into data points, and the standardization and commercialization which follow thereupon — has been a large contributor to many of the adversarial aspects of design pointed out by the book. As Williams himself says, these are so many answers to questions we never even asked. Ought we really just to add more measurement, as a gambler doubles down, seeking to reverse her losses? Increasing the scope of what our networked technologies “perceive” and record about us in order to further offload cognitive tasks like self-monitoring sounds like further entrenchment of external constraints on our internal guidance systems (our “starlight,” to use his metaphor) — which was identified as part of the original problem.

In sum, Stand Out of Our Light offers an immediate fix at least for that aspect of our dilemma allegorized in a tale from the fourth century BC at the opening of the book. When the heckler-philosopher Diogenes, who has been living on the streets since his exile from Sinope, is given an offer he can’t refuse from Alexander the Great — namely, anything he wants — he responds by demanding only that Alexander stand out of the way of his light. If the game of late-capitalist technological prowess is to offer the perception of a limitless horizon of possibility, or the transhumanist illusion of transcending various forms of finitude and limitation, it might be a lot like Alexander arriving with this promise of total wish fulfillment. We would do well to ask what is being obscured, blocked, ignored, or denied in the offer. This book makes the silhouette visible.

A few days after I finished it, it struck me that within the same two-week period (early June 2018), I had come across three references, from quite different contexts, to Milton Mayer’s They Thought They Were Free (University of Chicago Press, 1955). One of these was in this book. Williams’ points seemed to be condensed and illuminated by the reference, which itself more directly reflected events in the wider political climate. I’ve since begun to ask new questions, or familiar questions in new ways, about the outlines these new technologies give to our world — instead of switching between the black and white values shaping our silhouette, foregrounding the illusion that we have only one choice or another, we could attend to the very shape on offer itself.