Nicholas Carr | The Glass Cage: Automation and Us | October 2014 | 15 minutes (3,831 words)

The following is an excerpt from Nicholas Carr’s new book, The Glass Cage. Our thanks to Carr for sharing this piece with the Longreads community.

* * *

Back in the 1990s, just as the dot-com bubble was beginning to inflate, there was much excited talk about “ubiquitous computing.” Soon, pundits assured us, microchips would be everywhere—embedded in factory machinery and warehouse shelving, affixed to the walls of offices and homes, installed in consumer goods and stitched into clothing, even swimming around in our bodies. Equipped with sensors and transceivers, the tiny computers would measure every variable imaginable, from metal fatigue to soil temperature to blood sugar, and they’d send their readings, via the internet, to data-processing centers, where bigger computers would crunch the numbers and output instructions for keeping everything in spec and in sync. Computing would be pervasive. Our lives would be automated.

One of the main sources of the hype was Xerox PARC, the fabled Silicon Valley research lab where Steve Jobs found the inspiration for the Macintosh. PARC’s engineers and information scientists published a series of papers portraying a future in which computers would be so deeply woven into “the fabric of everyday life” that they’d be “indistinguishable from it.” We would no longer even notice all the computations going on around us. We’d be so saturated with data, so catered to by software, that, instead of experiencing the anxiety of information overload, we’d feel “encalmed.” It sounded idyllic.

The excitement about ubiquitous computing proved premature. The technology of the 1990s was not up to making the world machine-readable, and after the dot-com crash, investors were in no mood to bankroll the installation of expensive microchips and sensors everywhere. But much has changed in the succeeding fifteen years. The economic equations are different now. The price of computing gear has fallen sharply, as has the cost of high-speed data transmission. Companies like Amazon, Google, and Microsoft have turned data processing into a utility. They’ve built a cloud-computing grid that allows vast amounts of information to be collected and processed at efficient centralized plants and then fed into apps running on smartphones or into the control circuits of machines. Manufacturers are spending billions of dollars to outfit factories with network-connected sensors, as technology giants like GE and IBM promote the creation of an “internet of things.” Computers are pretty much omnipresent now, and even the faintest of the world’s twitches and tremblings are being recorded as streams of binary digits. We may not be encalmed, but we are data-saturated. The PARC researchers are starting to look like prophets.

* * *

The Age of Automation

There’s a big difference between a set of tools and an infrastructure. The Industrial Revolution gained its full force only after its operational assumptions were built into expansive systems and networks. The construction of the railroads in the middle of the nineteenth century enlarged the markets companies could serve, providing the impetus for mechanized mass production. The creation of the electric grid a few decades later opened the way for factory assembly lines and made all sorts of home appliances feasible and affordable. These new networks of transport and power, together with the telegraph, telephone, and broadcasting systems that arose alongside them, gave society a different character. They altered the way people thought about work, entertainment, travel, education, even the organization of communities and families. They transformed the pace and texture of life in ways that went well beyond what steam-powered factory machines had done.

The historian Thomas Hughes, in reviewing the arrival of the electric grid in his book Networks of Power, described how first the engineering culture, then the business culture, and finally the general culture shaped themselves to the new system. “Men and institutions developed characteristics that suited them to the characteristics of the technology,” he wrote. “And the systematic interaction of men, ideas, and institutions, both technical and nontechnical, led to the development of a supersystem—a sociotechnical one—with mass movement and direction.” It was at this point that what Hughes termed “technological momentum” took hold, both for the power industry and for the modes of production and living it supported. “The universal system gathered a conservative momentum. Its growth generally was steady, and change became a diversification of function.” Progress had found its groove.

We’ve reached a similar juncture in the history of automation. Society is adapting to the universal computing infrastructure—more quickly than it adapted to the electric grid—and a new status quo is taking shape. The assumptions underlying industrial operations have already changed. “Business processes that once took place among human beings are now being executed electronically,” explains W. Brian Arthur, an economist and technology theorist at the Santa Fe Institute. “They are taking place in an unseen domain that is strictly digital.” As an example, he points to freight shipping. Not long ago, coordinating a shipment of cargo across national borders required legions of clipboard-wielding functionaries. Now, it’s handled by computers. Commerce of all sorts is increasingly managed through, as Arthur puts it, “a huge conversation conducted entirely among machines.” To be in business is to have networked computers capable of taking part in that conversation. Any sizable company has little choice but to automate and then automate some more. It has to redesign its work flows and its products to allow for ever-greater computer monitoring and control, and it has to restrict the involvement of people in its supply and production processes. People, after all, can’t keep up with computer chatter; they just slow down the conversation.

The science-fiction writer Arthur C. Clarke once asked, “Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded?” In the business world at least, no stability in the division of work between human and computer seems in the offing. The prevailing methods of computerized communication and coordination pretty much ensure that the role of people will go on shrinking. We’ve designed a system that discards us. If unemployment worsens in the years ahead, it may be more a result of our new, subterranean infrastructure of automation than of any particular installation of robots in factories or software applications in offices. The robots and applications are the visible flora of automation’s deep, extensive, and invasive root system.

That root system is also feeding automation’s spread into the broader culture. From the provision of government services to the tending of friendships and familial ties, society is reshaping itself to fit the contours of the new computing infrastructure. The infrastructure orchestrates the instantaneous data exchanges that make futuristic breakthroughs like self-driving cars possible. It provides the raw material for the predictive algorithms that now shape the decisions of individuals and groups. It underpins the automation of classrooms, libraries, hospitals, shops, churches, and homes—places traditionally associated with the human touch. It allows the NSA and other spy agencies, as well as crime syndicates and nosy corporations, to conduct surveillance on an unprecedented scale. It’s what has shunted so much of our public discourse and private conversation onto tiny screens. And it’s what gives our various computing devices the ability to guide us through the day, offering a steady stream of personalized alerts, instructions, and advice.

Once again, men and institutions are developing characteristics that suit them to the characteristics of the prevailing technology. Industrialization didn’t turn us into machines, and automation isn’t going to turn us into automatons. We’re not that simple. But automation’s spread is making our lives more programmatic. We have fewer opportunities to demonstrate our own resourcefulness and ingenuity, to display the self-reliance that was once considered the mainstay of character. As computers continue to shrink in size and grow in power, as they go from devices we hold in our hands to ones we wear on our bodies, that trend seems set to accelerate still further.

* * *

Through a Glass

It was a curious speech. The event was the 2013 TED conference, held in late February at the Long Beach Performing Arts Center near Los Angeles. The scruffy guy on stage, fidgeting uncomfortably and talking in a halting voice, was Sergey Brin, reputedly the more outgoing of Google’s two founders. He was there to deliver a marketing pitch for Glass, the company’s “head-mounted computer.” He began with a scornful critique of the smartphone, a device that Google, with its Android system, had helped push into the mainstream. Pulling his own phone from his pocket, Brin looked at it with disdain. Using a smartphone is “kind of emasculating,” he said. “You know, you’re standing around there, and you’re just like rubbing this featureless piece of glass.” In addition to being “socially isolating,” staring down at a screen weakens a person’s sensory engagement with the physical world, he suggested. “Is this what you were meant to do with your body?”

Brin went on to extol the benefits of Glass. The new device would provide a far superior “form factor” for personal computing, he said. By freeing people’s hands and allowing them to keep their head up and eyes forward, it would connect them with their surroundings. They’d rejoin the world. It had other advantages too. By putting a computer screen permanently within view, the computerized eyeglasses would allow Google, through its Google Now service and other tracking and personalization routines, to deliver pertinent information to people whenever the device sensed they required advice or assistance. The company would fulfill the greatest of its ambitions: to automate the flow of information into the mind. With Glass on your brow, Brin said, you would no longer have to search the web at all. You wouldn’t have to formulate queries or sort through results or follow trails of links. “You’d just have information come to you as you needed it.” To the computer’s omnipresence would be added omniscience.

Brin’s awkward presentation earned him the ridicule of technology bloggers. Still, he had a point. Smartphones enchant, but they also enervate. The human brain is incapable of concentrating on two things at once. Every glance or swipe at a touchscreen draws us away from our immediate surroundings. With a smartphone in hand, we become a little ghostly, wavering between worlds. People have always been distractible, of course. Minds wander. Attention drifts. But we’ve never carried on our person a tool that so insistently captivates our senses and divides our attention. By connecting us to a symbolic elsewhere, the smartphone, as Brin implied, exiles us from the here and now. We lose the power of presence.

Brin’s assurance that Glass would solve the problem was less convincing. No doubt there are times when having your hands free while consulting a computer or using a camera would be an advantage. But peering into a screen that floats in front of you requires no less an investment of attention than glancing at one held in your lap. It may require more. Research on pilots and drivers who use head-up displays reveals that when people look at text or graphics projected as an overlay on the environment, they become susceptible to “attentional tunneling.” Their focus narrows, their eyes fix on the display, and they become oblivious to everything else going on in their field of view. In one experiment, performed in a flight simulator, pilots using a head-up display during a landing took longer to see a large plane obstructing the runway than did pilots who had to glance down to check their instrument readings. Some of the pilots using the head-up display never even saw the plane sitting directly in front of them.

Wearable computers, whether sported on the head like Glass or on the wrist like the Apple Watch, are new, and their appeal remains unproven. They’ll have to overcome some big obstacles if they’re to gain wide popularity. Their features are at this point sparse, they look dorky, and their built-in cameras and sensors make a lot of people fear for their privacy. But, like other personal computers before them, they’ll improve quickly, and they’ll almost certainly morph into less obtrusive, more useful forms. The idea of wearing a computer may seem strange today, but in a few years it could be the norm.

Brin is mistaken, though, in suggesting that Glass and other such devices represent a break from computing’s past. They give the established technological momentum even more force. As smartphones made powerful, networked computers more portable and personable, they also made it possible for software companies to program many more aspects of our lives. Together with cheap, friendly apps, they allowed the cloud-computing infrastructure to be used to automate even the most mundane of chores. Computerized glasses and wristwatches further extend automation’s reach. They make it easier to receive turn-by-turn directions when walking or riding a bike, for instance, or to get algorithmically generated advice on where to grab your next meal or what clothes to put on for a night out. They also serve as personal monitors, allowing information about your location, thoughts, and health to be transmitted back to the cloud. That in turn provides software writers and entrepreneurs with yet more opportunities to automate the quotidian.

* * *

A Slipping of the Will

As we grow more reliant on applications and algorithms, we become less capable of acting without their aid—we experience skill tunneling as well as attentional tunneling. That makes the software more indispensable still. Automation breeds automation. With everyone expecting to manage their lives through screens, society naturally adapts its routines and procedures to fit the routines and procedures of the computer. What can’t be accomplished with software—what isn’t amenable to computation and hence resists automation—begins to seem dispensable.

The PARC researchers argued, back in the early 1990s, that we’d know computing had achieved ubiquity when we were no longer aware of its presence. Computers would be so thoroughly enmeshed in our lives that they’d be invisible to us. We’d “use them unconsciously to accomplish everyday tasks.” That seemed a pipe dream in the days when bulky PCs drew attention to themselves by freezing, crashing, or otherwise misbehaving at inopportune moments. It doesn’t seem like such a pipe dream anymore. Many computer companies and software houses now say they’re working to make their products invisible. “I am super excited about technologies that disappear completely,” declares Jack Dorsey, a prominent Silicon Valley entrepreneur. “We’re doing this with Twitter, and we’re doing this with [the online credit-card processor] Square.” Apple has promoted the iPad as a device that “gets out of the way.” Picking up on the theme, Google markets Glass as a means of “getting technology out of the way.”

The prospect of having a complicated technology fade into the background, so it can be employed with little effort or thought, can be as appealing to those who use it as to those who sell it. “When technology gets out of the way, we are liberated from it,” the New York Times columnist Nick Bilton has written. But it’s not that simple. You don’t just flip a switch to make a technology invisible. It disappears only after a slow process of cultural and personal acclimation. As we habituate ourselves to it, the technology comes to exert more power over us, not less. We may be oblivious to the constraints it imposes on our lives, but the constraints remain. As the French sociologist Bruno Latour points out, the invisibility of a familiar technology is “a kind of optical illusion.” It obscures the way we’ve refashioned ourselves to accommodate the technology. The tool that we originally used to fulfill some particular intention of our own begins to impose on us its intentions, or the intentions of its maker.

As software programs gain more sway over us—shaping the way we work, the information we see, the routes we travel, our interactions with others—they become a form of remote control. Unlike robots or drones, we have the freedom to reject the software’s instructions and suggestions. It’s difficult, though, to escape their influence. When we launch an app, we ask to be guided—we place ourselves in the machine’s care.

Look closely at Google Maps. When you’re traveling through a city and you consult the app, it gives you more than navigational tips; it gives you a way to think about cities. Embedded in the software is a philosophy of place, which reflects, among other things, Google’s commercial interests, the backgrounds and biases of its programmers, and the strengths and limitations of software in representing space. In 2013, the company rolled out a new version of Maps. Instead of providing you with the same representation of a city that everyone else sees, it generates a map that’s tailored to what Google perceives as your needs and desires, based on information the company has collected about you. The app will highlight nearby restaurants and other points of interest that friends in your social network have recommended. It will give you directions that reflect your past navigational choices. The views you see, the company says, are “unique to you, always adapting to the task you want to perform right this minute.”

That sounds appealing, but it’s limiting. Google filters out serendipity in favor of insularity. It douses the infectious messiness of a city with an algorithmic antiseptic. What is arguably the most important way of looking at a city, as a public space shared not just with your pals but with an enormously varied group of strangers, gets lost. “Google’s urbanism,” the technology critic Evgeny Morozov has written, “is that of someone who is trying to get to a shopping mall in their self-driving car. It’s profoundly utilitarian, even selfish in character, with little to no concern for how public space is experienced. In Google’s world, public space is just something that stands between your house and the well-reviewed restaurant that you are dying to get to.” Expedience trumps all.

Social networks push us to present ourselves in ways that conform to the interests and prejudices of the companies that run them. Facebook, through its Timeline and other documentary features, encourages its members to think of their public image as indistinguishable from their identity. It wants to lock them into a single, uniform “self” that persists throughout their lives, unfolding in a coherent narrative beginning in childhood and ending, one presumes, with death. This fits with its founder’s narrow conception of the self and its possibilities. “You have one identity,” Mark Zuckerberg has said. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” He even argues that “having two identities for yourself is an example of a lack of integrity.” That view, not surprisingly, dovetails with Facebook’s desire to package its members as neat and coherent sets of data for advertisers. It has the added benefit, for the company, of making concerns about personal privacy seem less valid. If having more than one identity indicates a lack of integrity, then a yearning to keep certain thoughts or activities out of public view suggests a weakness of character. But the conception of selfhood that Facebook imposes through its software can be stifling. The self is rarely fixed. It has a protean quality. It emerges through personal exploration, and it shifts with circumstances. That’s especially true in youth, when a person’s self-conception is fluid, subject to testing, experimentation, and revision. To be locked into an identity, particularly early in one’s life, may foreclose opportunities for personal growth and fulfillment.

Every piece of software contains such hidden assumptions. Search engines, in automating intellectual inquiry, give precedence to popularity and recency over diversity of opinion, rigor of argument, or quality of expression. Like all analytical programs, they have a bias toward criteria that lend themselves to statistical analysis, downplaying those that entail the exercise of taste or other subjective judgments. Recommendation engines, whether suggesting a movie or a potential love interest, cater to our established desires rather than challenging us with the new and unexpected. They assume we prefer custom to adventure, predictability to whimsy. The technologies of home automation, which allow things like lighting, heating, cooking, and entertainment to be meticulously programmed, impose an industrial mentality on domestic life. They subtly encourage people to adapt themselves to established routines and schedules, making homes more like workplaces.

* * *

Secret Code

If we don’t understand the commercial, political, and ethical motivations of the people writing our software, or the limitations inherent in automated data processing, we open ourselves to manipulation. We risk, as Latour suggests, replacing our own intentions with those of others, without even realizing that the swap has occurred. The more we habituate ourselves to the technology, the greater the risk grows.

It’s one thing for mechanical systems to become invisible, to fade from our view as we adapt ourselves to their presence. Even if we’re incapable of fixing a leaky faucet or troubleshooting a balky toilet, we tend to have a pretty good sense of what the plumbing in our homes does—and why. Most technologies that have become invisible to us through their ubiquity are like that. Their workings, and the assumptions and interests underlying their workings, are self-evident, or at least discernible. The technologies may have unintended effects, but they don’t have hidden agendas.

It’s a very different thing for information technologies to become invisible. Even when we’re conscious of their presence in our lives, computer systems are opaque to us. Software codes are hidden from our eyes, legally protected as trade secrets in many cases. Even if we could see them, few of us would be able to make sense of them. They’re written in languages we don’t understand. The data fed into algorithms is also concealed from us. We have little knowledge of how it is collected, what it’s used for, or who has access to it. Now that software and data are stored in the cloud, rather than on personal hard drives, we can’t even be sure when the workings of systems have changed. Revisions to popular programs are made all the time without our awareness. The application we used yesterday is probably not the application we use today.

The modern world has always been complicated. Fragmented into specialized domains of skill and knowledge, it rebuffs any attempt to comprehend it in its entirety. But now, to a degree far beyond anything we’ve experienced before, the complexity itself is hidden from us. It’s veiled behind the artfully contrived simplicity of the screen, the user-friendly interface. When an inscrutable technology becomes an invisible technology, we would be wise to be concerned. At that point, the technology’s assumptions and intentions have infiltrated our own desires and actions. We no longer know whether the software is aiding us or controlling us. We’re behind the wheel, but we can’t be sure who’s driving.

* * *

Excerpted from The Glass Cage: Automation and Us by Nicholas Carr. Copyright © 2014 by Nicholas Carr. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

Nicholas Carr writes on technology and culture. In addition to The Glass Cage, his books include The Shallows, The Big Switch, and Does IT Matter?

Photo: Kevin P Trovini, Flickr