In 2012 the engineers working on Google’s self-driving car realized they had a problem. Early testers had agreed to always watch the road in case of emergencies, but many didn’t—and it put them at serious risk. This is what’s known as the handoff problem: how to alert and engage the distracted human when the computer falters. Google’s solution? Bypass the issue by building a vehicle that operates entirely on its own.

Over the next few years, several other makers of self-driving cars, companies like Ford, General Motors, and Volvo, de-emphasized or abandoned their efforts to crack the handoff problem, mirroring Google in their quest to go full robo. That’s why, in 2017, we’re closer than ever to being chauffeured around by machines, perfect drivers who won’t bother us with small talk.

If that sounds ideal, it is not. These companies have fundamentally misunderstood the role artificial intelligence should play in our lives. And before those fully autonomous cars arrive and are widely adopted, hundreds of thousands of lives will be lost that might have been saved.

Think, for starters, about where most self-driving cars are being deployed: major cities. That’s where the paying customers are. Fleets of driverless taxis are projected to deliver billions in annual profits by 2030. What’s more, operating within a city’s limited geographic area reduces the technological challenges. But more than half of the 35,000 annual traffic deaths in the US don’t occur on major city streets—they happen on rural roads and highways. And on those roads, where obstacles like random construction zones, surprise detours, or dramatic changes in weather can trip up robots, you need a human behind the wheel who can react. You need an effective handoff.

Yes, it’s hard. A car that relies on human backup must know things no car ever has before, like whether that human is paying attention, if they can safely take the wheel, and how to tell them to do so. And it has to accomplish all that in a few seconds. No wonder Google said see ya.

But it’s not impossible. At least one automaker has embraced the challenge: Audi. Today, the company debuted its latest version of the A8 sedan with a self-driving system that relies on the human for support.

Audi’s engineers and psychologists have spent years teaching the car to drive safely on the highway, but their real focus was on the human-machine interface that enables the tricky handoff. The A8 watches its human with a facial-recognition camera in the instrument cluster; its steering wheel knows when it’s being touched. If the car determines it needs help or senses that the human is not paying attention, it pesters them with visual and audio cues. If those don’t work, the car tightens the seatbelt and pumps the brakes. Still no response? The car will turn on its flashers, slow to a stop, and unlock the doors. Audi developed that sequence by subjecting hundreds of people on multiple continents to tests in simulators. The system can even be optimized for particular users—for example, drivers in China tend to prefer visual warnings over aural ones.

The A8 hasn’t proven itself yet, and Audi will never sell enough of the executive-toting sedans to make much of a dent in traffic fatalities. But its human-centered focus can serve as a model for other companies. Some have started: Cadillac’s Super Cruise feature uses a camera to make sure the human is ready to take control. IBM researchers recently patented an AI system to determine who should be in charge in any given situation. Tesla helps humans supervise its Autopilot with a display of what the car “sees.”

Decades from now, when fully autonomous vehicles are available everywhere, these stopgap measures won’t be necessary. But that doesn’t mean they should be ignored today. Even if saving as many lives as possible isn’t your goal—though it should be—the other major advantage to developing human-machine hybrids is that it builds a framework for answering an urgent question with implications far beyond cars: As increasingly intelligent machines come to life, how should they interact with humanity?

Robots that think and learn will infiltrate spheres from health care and criminal justice to finance and popular culture. If they are to make life truly better and not just more efficient, they must keep the humans they serve top of mind. And the people creating this vision of the future must do the same.

A truly autonomous car won’t care if its passengers are watching the road. But it should know whether they’re uncomfortable, or stressed, or nervous. Because even when humans aren’t driving, they’re still what matters most.

Senior associate editor Alex Davies (@adavies47) writes about transportation for WIRED.

This article appears in the July issue.