The object, vaguely pink, sits on the shoulder of the freeway, slowly shimmering into view. Is it roadkill? A weird kind of sagebrush? No, wait, it’s … a puffy chunk of foam insulation! “The laser almost certainly got returns off of it,” says Chris Urmson, sitting behind the wheel of the Prius he is not driving. A note is made (FOD: foreign object debris, lane 1) as we drive past, to help our computerized car understand the curious flotsam it has just seen.

It’s a Monday, midday, and we are heading north on California Highway 85 in a Google autonomous vehicle. In October 2010, when The New York Times reported that Google had built a fleet of self-driving cars that had already collectively traversed some 140,000 miles of California asphalt, it came as a shock, a terrestrial Sputnik. Now the cars, with their whirling rooftop laser arrays, are as familiar in the Bay Area as the company’s camera-crowned Street View vehicles. Indeed, the two are often confused, which is presumably why the words “self-driving car” have recently been plastered on this one’s driver-side door.

Anthony Levandowski, business lead on Google’s self-driving-car project, sits in the passenger seat, lanky and spectacled, wearing loud athletic shoes and clutching a MacBook Pro with a bumper sticker that reads “My other car drives itself.” Urmson, with the soft-spoken, intense mien of a roboticist who has debugged a Martian rover in the deserts of Chile, occupies the nominal “driver’s seat”—just one of the entities open to ontological inquiry this morning.


The last time I was in a self-driving car—Stanford University’s “Junior,” at the 2008 World Congress on Intelligent Transportation Systems—the VW Passat went 25 miles per hour down two closed-off blocks. Its signal achievement seemed to be stopping for a stop sign at an otherwise unoccupied intersection. Now, just a few years later, we are driving close to 70 mph with no human involvement on a busy public highway—a stunning demonstration of just how quickly, and dramatically, the horizon of possibility is expanding. “This car can do 75 mph,” Urmson says. “It can track pedestrians and cyclists. It understands traffic lights. It can merge at highway speeds.” In short, after almost a hundred years in which driving has remained essentially unchanged, it has been completely transformed in just the past half decade.

Google isn’t the only company with driverless cars on the road. Indeed, just about every traditional automaker is developing its own self-driving model, peppering Silicon Valley with new R&D labs to work on the challenge. Last year, a BMW drove itself down the Autobahn, from Munich to Ingolstadt (“the home of Audi,” as BMW’s Dirk Rossberg told me at the company’s outpost in Mountain View, California). Audi sent an autonomous vehicle up Pikes Peak, while VW, in conjunction with Stanford, is building a successor to Junior. At the Tokyo Auto Show in November, Toyota unveiled its Prius AVOS (Automatic Vehicle Operation System), which can be summoned remotely. GM’s Alan Taub predicts that self-driving cars will be on the road by the decade’s end. Groups like the Society of Automotive Engineers have formed special committees to draft autonomous-vehicle standards. Even Neil Young is getting in on the act: Roboticist Paul Perrone has been busily revamping the rocker’s ’59 Lincoln Continental to drive itself. “Everyone thinks this is coming,” says Clifford Nass, director of Stanford’s Revs Program.

As we drive the Google car—or are driven by it—I watch the action unfold on the computer monitor mounted on the passenger side of the dashboard. It shows how the car is interpreting the world: lanes, signs, cars, speeds, distances, vectors. The rendering is nothing special—a lot of blocky wireframe that puts me in mind of Atari’s classic Battlezone. (The display is just one of a host of geeky details—to change lanes, for instance, the driver presses buttons marked Shift and Left on a keyboard near the monitor.) Yet it is absolutely fascinating, almost illicitly thrilling, to watch as the car not only plots and calculates the myriad movements of neighboring vehicles in the moment but also predicts where they will be in the future, like high-speed, mobile chess. Onscreen, the car is constantly “acquiring” targets, surrounding them in red boxes, tracing raster lines to and fro, a freeway version of John Madden’s Telestrator. “We’re analyzing and predicting the world 20 times a second,” Levandowski says.
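The 20-times-a-second prediction loop Levandowski describes can be pictured, in its simplest form, as an extrapolation step run on every tracked target. Here is a minimal sketch, assuming (purely for illustration) a constant-velocity motion model and a 20 Hz update rate; Google has not disclosed its actual tracker or motion models, and the `Track` class and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical tracked vehicle, in road-aligned coordinates."""
    x: float   # meters ahead of our car
    y: float   # meters of lateral offset
    vx: float  # relative velocity, m/s
    vy: float

def predict(track: Track, horizon_s: float, dt: float = 0.05):
    """Project a target forward in time at 20 Hz (dt = 0.05 s),
    assuming its velocity stays constant over the horizon."""
    steps = round(horizon_s / dt)
    return [(track.x + track.vx * dt * k, track.y + track.vy * dt * k)
            for k in range(1, steps + 1)]

# A car 30 m ahead, closing on us at 2 m/s, predicted one second out:
path = predict(Track(x=30.0, y=0.0, vx=-2.0, vy=0.0), horizon_s=1.0)
```

A real tracker fuses laser and radar returns and uses far richer motion models, but even this crude version conveys the idea: every red-boxed target on the screen is being projected forward, like pieces on a high-speed chessboard.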

A car comes speeding along the adjacent on-ramp. Do we accelerate or slow? It’s a moment that puzzles many human drivers. Our vehicle chooses to decelerate, but it can rethink that decision as more data comes in—if, for instance, the merging car brakes suddenly. The computer flags a car one lane over, maybe 30 feet in front of us, and slows imperceptibly. “We’re being held back by this guy because we don’t want to be in his blind spot,” Levandowski says. A bus suddenly looms next to us. “Even if you can drive in the center of the lane, down to the centimeter, that doesn’t mean it’s the safest route,” he says. And so the car drifts just a bit to the left to distance itself from the bus. “If you look at it, we’re not actually driving center, though we’re still not driving as bad as he is,” he says, pointing to a gray SUV ahead that’s straddling two lanes.

Levandowski has a point. I was briefly nervous when Urmson first took his hands off the wheel and a synthy woman’s voice announced coolly, “Autodrive.” But after a few minutes, the idea of a computer-driven car seemed much less terrifying than the panorama of indecision, BlackBerry-fumbling, rule-flouting, and other vagaries of the humans around us—including the weaving driver who struggles to film us as he passes.

Automatic Transition
Self-driving cars may seem like science fiction, but most of the technology already exists. Indeed, over the past century we’ve gradually ceded our driving duties to automated systems.






The Prius begins to seem like the Platonic ideal of a driver, against which all others fall short. It can think faster than any mortal driver. It can attend to more information, react more quickly to emergencies, and keep track of more complicated routes. It never panics. It never gets angry. It never even blinks. In short, it is better than human in just about every way.

Meanwhile, cruising along and freed from the acts of steering and braking, we’ve become backseat drivers of the entire traffic stream. I find myself imagining how much more smoothly the system would function if every car were like this one. Even at its most packed, only about 5 percent of a highway’s surface is covered by automobiles; if cars were hyperalert and algorithmically optimized, you could presumably squeeze many more of them onto the pavement. And then there’s the safety benefit. Traffic is the most dangerous thing that most of us ever encounter. From 2001 to 2009, American roads claimed 369,629 lives. And the culprit was not poorly lighted thoroughfares or faulty gas pedals but us—one landmark study cited “human errors” as the “definite or probable causes” of 93 percent of crashes.

Faced with the alternatives—that guy who cut us off without signaling, the mom nursing an Ambien hangover who’s drifting into the right lane, the Bluetooth jockey doing 90 mph—I welcome our new robotic Prius-driving overlords.

I’m not the first person to feel this way. In his 1940 book, Magic Motorways, futurist and streamlining guru Norman Bel Geddes, who created the General Motors pavilion at the 1939 World’s Fair, predicted big things for cars 20 years later: “These cars of 1960 and the highways on which they drive will have in them devices which will correct the faults of human beings as drivers. They will prevent the driver from committing errors. They will prevent his turning out into traffic except when he should. They will aid him in passing through intersections without slowing down or causing anyone else to do so and without endangering himself or others.”

In reality, the cars of 1960, poised at the apex of the delirious tail-fin phase of American automotive history, were lumbering gas hogs, not only bereft of any of Bel Geddes’ promised inventions but, beneath their shiny and muscular exteriors, dangerous chambers of sharply protruding dashboards and impaling steering columns. In the ensuing decades, cars became safer, but the people driving them did not. It is only now, well more than a century after the invention of the so-called automobile—an independently powered car—that we are finally developing a true automobile, a car that can drive itself.

As the news of Google’s self-driving car has spread, company Kremlinologists and auto industry wags alike have debated—with varying shades of anticipation or dread—whether it is a fun experiment or a serious challenge to the auto industry. Was the company just looking for an oxygen hit of innovation via its stockholder-straining techno-fantasist skunkworks, Google X? Is it just expressing the aw-shucks altruism the company is known for? The fanboy enthusiasms of higher management? Or does Google, as one report had it, have designs on building its own cars?

When I ask these questions around Google, the answers I get are polite, if bordering on exasperated. “Our clear statement,” Urmson says, “is we want to improve people’s lives by transforming mobility.” I get the sense that asking about business models is a bit rude. “Like a lot of things at Google, we want to figure out something big and important,” he continues, “and we’ll figure out the rest later.”

If Urmson’s team is taking a Googly approach to the auto industry, it’s also taking a Googly approach to driving. The company, Urmson notes, “is really all about processing big data,” and the road is just another data set to be mined. So Google isn’t teaching its computers how to drive. It’s collecting data—its cars have driven 200,000 miles in total, recording everything they see—and letting its algorithms figure out the rules on their own.

“If you read the DMV handbook on four-way stop signs, it’s easy,” Urmson says. “Whoever gets there first gets to go. If there are simultaneous arrivals, priority goes to the vehicle on the right.” But it rarely works that way. “People optimize stop signs,” he says. A polite robot vehicle, playing by the official driving rules, could be lost in a sea of aggressive humans. Instead, it needs to learn how people really drive. “This is the data-driven viewpoint,” says Sebastian Thrun, the Stanford roboticist who heads the self-driving project. “The data can make better rules. It’s very deep in the roots of almost everything Google does.” Urmson describes it as an attempt to “hack driving.”

It is, after all, “an accident that the car was invented before the computer,” Levandowski says. And so his team is trying to take the logic and power of a computer and build a car around it. The auto companies, Levandowski says, “have an existing thing that they’re improving incrementally, and they’re concerned about maintaining and growing market share. We’re thinking about driving as a blank slate: How would you address it if you had more freedom?”

But while Google wants to create, in essence, computers that drive, the auto industry has been trying to make its vehicles drive more like computers. Bolstered by increasingly powerful and affordable sensors, sophisticated algorithms, and Moore’s law, the world’s carmakers have been slowly redefining what it means to be a driver, encouraging us to offload everything from shifting gears to parallel parking. The automated car isn’t just around the corner—it’s here. The more interesting question isn’t when we will let go of the wheel completely but what form and purpose the car will have when we finally do.

The Ultimate Self-Driving Machine
The next generation of gearheads won’t obsess over horsepower and torque; they’ll focus on things like radar range, communication latency, and pixel resolution. Here’s a look at the technology that will power the autonomous cars of the near future. —T.V.

1 Radar
High-end cars already bristle with radar, which can track nearby objects. For instance, Mercedes’ Distronic Plus, an accident-prevention system, includes units on the rear bumper that trigger an alert when they detect something in the car’s blind spot.

2 Lane-keeping
Windshield-mounted cameras recognize lane markings by spotting the contrast between the road surface and the boundary lines. If the vehicle leaves its lane unintentionally, brief vibrations of the steering wheel alert the driver.

3 LIDAR
Google employs Velodyne’s rooftop Light Detection and Ranging system, which uses 64 lasers, spinning at upwards of 900 rpm, to generate a point cloud that gives the car a 360-degree view.

4 Infrared Camera
Mercedes’ Night View assist uses two headlamps to beam invisible, nonreflective infrared light onto the road ahead. A windshield-mounted camera detects the IR signature and shows the illuminated image (with hazards highlighted) on the dashboard display.

5 Stereo Vision
Mercedes’ prototype system uses two windshield-mounted cameras to build a real-time 3-D image of the road ahead, spotting potential hazards like pedestrians and predicting where they are headed.

6 GPS/Inertial Measurement
A self-driver has to know where it’s going. Google uses a positioning system from Applanix, as well as its own mapping and GPS tech.

7 Wheel Encoder
Wheel-mounted sensors measure the velocity of the Google car as it maneuvers through traffic.

Illustration: Thegreatergood.cc

No computer voice greets me when I press the ignition button of the new S-Class Mercedes parked in the front lot of Mercedes-Benz Research & Development in Palo Alto. This is no one-off prototype vehicle; it’s a production model selling in showrooms today. But as I spark the engine’s almost too perfectly modulated Teutonic growl and shift it into drive, I set in motion an unseen array of automation.

My driving, for example, is being constantly monitored by the car’s Attention Assist function, which tracks more than 70 elements—from minor steering wheel movements to my use of turn signals—for signs of operator fatigue. After 20 minutes, the baseline is set and the car will flag subsequent deviations. If, while parsing the data, it senses that I’ve grown weary, a coffee cup icon pops up in the instrument cluster. (It’s up to me to pull over for the coffee.)

Attention Assist is just the beginning. R&D head Johann Jungwirth, who’s sitting next to me, ticks off with a salesman’s efficiency everything the car does for me: If it rains, the wipers activate. If I enter a tunnel, the headlights adjust their illumination. When a car in the neighboring lane creeps into my blind spot, a red triangle illuminates in my side mirror; if I try to change lanes, the icon flashes and beeps. If I drift out of my lane, the steering wheel rumbles gently. The Distronic Plus system—Mercedes’ brand of what’s called adaptive cruise control—maintains a steady following distance, braking automatically when the car ahead slows. And if I’m about to crash and haven’t heeded earlier warnings, the car will take me out of the loop entirely, activating its robotic braking system and even rolling up the windows. Why that last step? From the backseat, Luca Delgrossi, Mercedes’ Palo Alto director of driver assistance research, explains that it’s for the airbags: “You need to offer a surface for them to hit.”

It is, in short, a stealthily semiautonomous computer on wheels. “There are tens of thousands of processes running in parallel,” Jungwirth says. A car like this boasts upwards of 60 ECUs, or electronic control units, handling everything from automatic braking to automatic trunk opening. The technology trade magazine IEEE Spectrum notes that a premium-class automobile runs 100 million lines of computer code, more than Boeing’s new 787 Dreamliner. Intellectual property lawsuits, the bane of the technology industry and until recently a rarity in the car business, have been proliferating. As futurist Paul Saffo says, for a company like Mercedes nowadays, “the value add is the software and the computers. The wheels are primarily there to keep the computers from dragging on the ground.”

As I drive the Mercedes through Palo Alto, I am reminded of a horseback outing in South America a few weeks earlier. A novice, I was put on an experienced and quite tame horse. It knew the route we were on by heart, accelerating when it could smell the comfort of its own stable, and I had to make only occasional corrections. “Driving an automated car is very much like riding a horse,” says Donald Norman, author of The Design of Future Things and a consultant for BMW, among other automakers. “You can ride a horse with tight reins or loose reins. Loose reins means the horse is in control—but even when you’re in control, the horse is still doing the low-level guidance, stepping safely to avoid holes and obstacles.”

And so with loose reins I let the car do its work. It is effective, if slightly mechanistic. When a car ahead slows to turn, for example, the Mercedes fails to recognize that the vehicle will soon be out of my way, so we brake too much for my taste and then accelerate from a dead stop. The car’s lane-departure warning feature, which alerts drivers when they drift out of their lane, doesn’t work if the lane and edge markings are worn away—a common phenomenon in our infrastructure-challenged country. Then there are the technical issues that still plague sensors. Ice bedevils radar; snow challenges cameras. If, while cresting a hill, I were to encounter a car that was parked in the middle of the road, the Distronic Plus would treat it like any other stationary object—a building, a billboard, a mailbox—rather than a vehicle that might move soon. And radar doesn’t like bends. “If a curve is sharp,” Delgrossi says, “it cannot follow objects in front of you.”

That’s why Mercedes has been working on a system beyond radar: a “6-D” stereo-vision system, soon to be standard in the company’s top models. Delgrossi takes me back to the research center to show off a prototype of the technology, installed in a 2011 Mercedes CLS 550. We are joined by Alexander Barth and Gunther Krehl, two vision scientists who work for the auto manufacturer. We huddle behind the open trunk, nodding at the blinking array of off-the-shelf processors much as an earlier generation of gearheads would gather around an open hood to admire the engine.

As we start to drive, a screen mounted in the center console depicts a heat map of the street in front of us, as if the Predator were striding through Silicon Valley sprawl. The colors, largely red and green, depict distance, calculations made not via radar or laser but by an intricate stereo camera system that mimics human depth perception. “It’s based on the displacement of certain points between the left and the right image, and we know the geometry of the relative position of the camera,” Barth says. “So based on these images, we can triangulate a 3-D point and estimate the scene depth.” As we drive, the processing software is extracting “feature points”—a constellation of dots that outline each object—then tracking them in real time. This helps the car identify something that’s moving at the moment and also helps predict, as Krehl notes, “where that object should be in the next second.”
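Barth is describing the standard stereo-triangulation relation: for a calibrated camera pair, a point’s depth is inversely proportional to its disparity, the horizontal displacement of that point between the left and right images. A minimal sketch of that arithmetic, using a hypothetical focal length and baseline (the actual rig’s calibration isn’t given):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulate scene depth from the displacement (disparity) of a
    matched feature point between the left and right images, using the
    pinhole-camera relation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 1400 px focal length, 30 cm camera baseline.
# A feature point shifted 21 px between the two images is 20 m away:
print(depth_from_disparity(1400.0, 0.30, 21.0))  # → 20.0
```

Note the inverse relationship: distant objects produce tiny disparities, which is why calibration accuracy and image resolution limit a stereo system’s useful range.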

The stereo vision not only points out potential obstacles but can ID what those obstacles are. It can spot pedestrians and cyclists long before a driver likely would. It can distinguish between a stationary car and a mailbox. It can pick out potential dangers that are often obscured by humans’ “inattentional blindness,” which can make us fail to notice even things we are looking directly at.

The only way to teach the car all this, of course, is to drive it—train the algorithms by cramming them with input, just as Google does with its cars. The current prototype for pedestrian-recognition software in the Mercedes, for instance, is based on 1.5 million (and climbing) samples of real and “virtual” pedestrians. And each driving culture demands its own version of the algorithms. In Germany, for example, Mercedes’ cars can alert drivers to the presence of speed limit signs. “In Europe, these signs are formed by a red circle, a figure not easily found in nature and that can be easily detected,” Delgrossi says. In the US, however, speed limit signs tend to be rectangular, which generates false positives with all kinds of other objects, like billboards and buildings. That’s part of the reason Delgrossi’s team is here: to teach the car to drive in the US. “Even the weather and lighting conditions can be different,” he says. “We have some algorithms optimized for German weather. They sometimes need a little tuning for California.”

Ensconced in the buttery leather driver’s seat, I am reminded of Emerson: “Things are in the saddle,” he wrote, “and ride mankind.” The truth is we have gradually been withdrawing from active engagement with the process of operating a car. We automated the shifting of gears. We went from manual steering to power steering and then finally to “drive-by-wire,” in which the mechanical connection between the steering wheel and the tires was replaced by a series of electrical impulses. We gave up paper maps for digital navigation systems. The hazards of parallel parking have been ironed out by ultrasonic sensors. This year, electronic stability control is standard on vehicles sold in the US for the same reason antilock brakes are standard in Europe: Its algorithms can perform better than humans in emergency maneuvering.

Each of these developments generated a brief period of resistance, which faded quickly as the new system began to seem natural. We do not feel as if we have lost something essential. On the contrary, in the same way that it would now feel strange to be in an elevator run by a human operator, it’s the absence of technology that begins to feel uncomfortable. Incrementally, more of the things that we think are innate to the driving experience—steering, braking, accelerating—will be out of our hands.


In fact, taken altogether, these automatic systems already approach full-blown autonomy. Ricky Hudi, who heads electrical and electronic development for Audi and whom I met at the Frankfurt Auto Show, notes that the adaptive cruise-control system in his company’s A8 features a “stop and go” function for low-speed traffic. “All you have to do is extend this system with an image-recognition and laser-scanning system and an electrical steering system,” he says. “Then you’re pretty close to letting go of the wheel completely.” Indeed, Mercedes is nearing deployment of a “traffic jam assist” in which the car not only maintains its distance from the car in front of it—as with current adaptive cruise-control mechanisms—but steers as well.

There’s just one catch. As Ralf Herrtwich, head of Daimler’s global driver assistance and chassis systems research, tells me at the auto show, the driver is still legally required to maintain control of the vehicle. “Right now, if the driver does some steering motion, that is considered by the authorities as a significant hint that the driver is in the loop,” he says. And so another set of sensors will detect the driver’s hands on the wheel. Herrtwich calls this commanded maneuvering—holding the wheel as you feel it slip through your hands.

This brings up the most challenging obstacle on our road to the autonomous-driving future: managing the handoff. For as long as anyone, even Google, is willing to predict, cars will by necessity be semiautonomous; human drivers will still have to play some role. But figuring out what that role will be is complicated. Are we pilots or copilots? How far out of the loop can we be taken? “We need clear mental models of when you are better at something and when the car is better,” Stanford’s Nass says.

So far, automakers have kept their autonomous technology, like ABS, essentially invisible to drivers. But as the technologies become more powerful, drivers will have to navigate a new kind of uncanny valley, the potential sense of alienation in being piloted by a ghostly machine. “Many people describe this sensation,” Nass says. “Does the car even know I’m here?”

Meanwhile, the legal and liability landscape is essentially uncharted. “There are places where technology outpaces the law,” Google’s Levandowski says. “This is one area where it outpaces it by a lot.” In California, there is no law concerning self-driving cars. In 2011, Google helped Nevada draft the first legislation to allow autonomous cars to be driven legally on state highways. It’s the only time a motor vehicle department has had to deal with the issue.

Beyond bureaucracy, there are deeper legal questions. Ryan Calo, director for privacy and robotics at Stanford Law School’s Center for Internet and Society, which is studying the legal framework for quasi-autonomous vehicles, notes how active the liability landscape already is when it comes to cars’ safety features. “People sue over all kinds of stuff. People sue because some feature that was supposed to protect them didn’t. People sue because their car didn’t have a blind-spot warning when other cars at the same price point did.” Imagine the complexity we’ll have when cars drive themselves. Who will be responsible for their operation—the car companies or the drivers? What happens, for example, when a highway patrol officer pulls over a self-driving car? Who gets the ticket?

As a RAND report observed, even as automakers create more semiautonomous technologies, they “will want to preserve the social norm that crashes are primarily the moral and legal responsibility of the driver, both to minimize their own liability and to ensure safety.” Consider what happened to the remote-parking assistant BMW developed a few years ago for getting into narrow spots. “You push a button and the car goes in and parks itself” while the driver waits outside, says Donald Norman, the Design of Future Things author. When he asked BMW executives why he didn’t see it on the market, Norman says he was told, “The legal team wouldn’t let them go forward.”

The most slippery territory for autonomous vehicles, however, may be social and cultural. Do we want to give up the wheel? More than status, the car represents freedom. It’s a fundamental part of our character, liberty with the turn of a key—that ability, as the narrator of John Updike’s Rabbit, Run describes it, to “drive all night through the dawn through the morning through the noon park on a beach take off your shoes and fall asleep by the Gulf of Mexico.”

Google’s Urmson bristles at the idea that an autonomous car takes away your freedom. “It provides more freedom,” he says. “If you’re disabled, if you’ve lost the privilege of driving, you can’t get around in American society. You’re stuck.” Furthermore, he points out, driving is often kind of a drag. “Most of driving is not a car commercial,” he says. “The average American commutes 52 minutes a day, with the purpose of getting from point A to point B, not with the purpose of winding through the mountains and enjoying The Sound of Music.”

“The fact that you’re still driving is a bug,” Levandowski says, “not a feature.”

There may be signs that our traditional attachment to the car is already beginning to slacken. Since 2007, growth in US car ownership has steadily declined. Meanwhile, the number of miles that residents of developed countries drive each year has flatlined, even as per capita GDP has risen—challenging the long-held idea that cars were necessary for prosperity. In the US, the number of drivers under the age of 20 who hold licenses has been declining as well—from about 12 million in 1978 to just under 10 million in 2009—while a recent Gartner report noted that nearly half of teenagers prefer an Internet connection to a car. (Only 15 percent of self-identified baby boomers said the same.)

Of course, automakers would rather not force drivers to choose between a smartphone and a car. Just down the road from the Facebooks and Pandoras of Silicon Valley, carmakers like BMW and VW are working to integrate every aspect of modern digital life into the driving experience. Mercedes, the first big company to set up shop in the Valley, in 1994, says it was also the first manufacturer to feature in-car Internet access, in 1998, and now boasts of being the first to offer “full Facebook integration.” One day, Mercedes’ Jungwirth says, the car will be as flexible and easy to upgrade as a smartphone—a big change for an industry that has had trouble integrating cutting-edge consumer electronics due to its sluggish product cycles. And customers have started to expect this kind of functionality, says Sven Beiker, who directs the Center for Automotive Research at Stanford. “Do I care if my engine has four valves? Not really,” he says. “But if it has a lot of megabits per second …”

This is not exactly comforting, not in a world where, as one landmark study by the Virginia Tech Transportation Institute found, driver distraction accounted for more than 80 percent of “safety-critical events.” Auto industry executives insist that they are operating within the telematics safety guidelines established by the Alliance of Automobile Manufacturers, an improvement over the current state of affairs, in which drivers simply use their smartphones. When it comes to Facebook, for example, “we have decided what’s important while driving,” Jungwirth says. “You won’t have your wall, your newsfeed, just because it’s too text heavy.” He keeps repeating, like a mantra, the company’s ethos: “Hands on the wheel, eyes on the road, mind on the road.”

But if your mind is also on Facebook, how much is left for traffic? A question kept nagging at me: Were all the increasingly sophisticated driver-assist technologies—ostensibly meant to guard against exterior hazards—being created as a kind of rearguard action to mitigate, or even enable, the increasingly rich experiences available to the driver inside the vehicle? Nevada’s texting ban, for instance, does not apply to self-driving cars. Maybe the problem is not that texting and Facebook are distracting us from driving. Maybe the problem is that driving distracts us from our digital lives.

In the 19th century, Karl Benz, founder of the company that became Mercedes-Benz (and the first person to legally operate an automobile on public roads), predicted the global market for his invention would be limited by the lack of qualified chauffeurs. It’s one of many examples of the way we often fail to anticipate the disruptive social changes that technological innovation can bring. Nobody knows how self-driving cars might change the fabric of our lives. “The moment you start putting intelligence in a car you start changing the essence of what a car is,” futurist Saffo says.

Consider, for instance, GM’s OnStar system. Initially a geolocation-equipped service intended to provide roadside assistance to stranded drivers—but capable, via its connected nature, of much more—it has become a kind of digital Trojan horse. Most recently, GM signed a partnership with RelayRides, a “neighbor-to-neighbor” car-sharing service that lets drivers rent out their vehicles when they are not in use. One simple but powerful feature of OnStar is that it offers remote unlocking of a vehicle. Integrated with RelayRides’ software, this allows easy access for renters, who don’t have to worry about exchanging keys with the owners. “I walk up to the car, send a command to OnStar with my phone, and it gets unlocked remotely,” explains RelayRides founder Shelby Clark. GM is the first major manufacturer to partner with the company, but Clark says that RelayRides has “had conversations with lots of automakers. Everyone is moving toward this concept of the networked car, the car as a platform.” (Indeed, there’s nary a big automaker that’s not partnering with car-sharing services—or building its own.) For Clark, it’s a no-brainer: “Why is my mobile phone more powerful than my car?”

This is the kind of notion that seems like anathema to our current model of auto ownership. Sharing a rental car is one thing. But who wants to give a stranger access to the family chariot? Susan Shaheen, codirector of UC Berkeley’s Transportation Sustainability Research Center, notes that people are more willing to rent out their home than their car. But that’s largely because of concerns over insurance—a hurdle RelayRides has cleared.

Furthermore, as we gradually relinquish our driving duties, it’s conceivable that we may also give up our sense of ownership of the car itself. For example, as Matthew Crawford writes in Shop Class as Soulcraft, some Mercedes models come without a dipstick. “The burden of paying attention to his oil level he has outsourced to another,” Crawford writes: the technician, the dealer, the corporation, the shareholders. “There are now layers of collectivized absentee interest in your motor’s oil level, and no single person is responsible for it.”

Why do we embrace these, as Crawford calls them, “attractions of being disburdened of involvement with our own stuff”? Does anyone long for their teeming towers of CDs over the simplicity of Spotify? Similarly, we may well come to view the car less as an object to be owned than as a service to be streamed from the cloud. We already have the experience, through an app like Uber, of summoning a car service with our smartphone, then watching as it moves toward us on a Google map, like the progress bar of a download. The final leap here is to envision a self-driving car that can be commanded like an elevator. When a car can drive itself to our door whenever we want it, why own something that spends more than 90 percent of the time simply parked?

Still, it’s impossible to predict how the self-driving car would change our lives. On his Mac, Google’s Levandowski keeps a photo that epitomizes the 1950s vision of the self-driving car. In it, a family hunkers inside a massive tail-finned convertible. The automobile pilots down the highway while the family members (sans seat belts) play a board game, blissfully disengaged from the activity on the road. It’s a quaint idea, in any number of ways—for one thing, it’s unlikely we’ll use the free time generated by automation for leisure, let alone board games. But whatever vision we conjure today, it runs that same risk of retro-futurist pathos. When the self-driving future arrives, we’ll adapt to it not as something radical but as the mundane magic we’ve come to expect.

Tom Vanderbilt (tomvanderbiltnyc@gmail.com) is the author of Traffic: Why We Drive the Way We Do (and What It Says About Us).