Artificial intelligence is advancing rapidly, and technology innovators are quickly transferring that progress to the vehicles we drive every day. What started with adaptive cruise control has grown to include additional driver-assistance features like automatic braking, cross-traffic alert, and lane keeping (among others).

Manufacturers like Tesla have combined these technologies with a multitude of cameras and sensors around the vehicle to market systems like Autopilot, which can take over many primary vehicle controls while the driver oversees the events that unfold before their eyes. The end game, however, is fully autonomous vehicles — like Google’s self-driving cars — that don’t require human intervention to navigate city streets and highways.

Any time a computer takes over responsibilities previously handled by humans, especially responsibilities that put human lives at risk, questions of ethics are bound to follow. If an impending accident is unavoidable, who does a self-driving vehicle attempt to protect: its driver, or the pedestrians and other drivers on the road? How do you program this kind of reasoning into an autonomous car?

Well, the folks at Mercedes-Benz already know what they would do if one of their drivers faced a potentially life-ending accident. “If you know you can save at least one person, at least save that one. Save the one in the car,” said Christoph von Hugo in an interview with Car and Driver at the Paris Auto Show earlier this month. “If all you know for sure is that one death can be prevented, then that’s your first priority.

“You could sacrifice the car. You could, but then the people you’ve saved initially, you don’t know what happens to them after that in situations that are often very complex, so you save the ones you know you can save.”

In a study published in Science earlier this year, the overwhelming majority of respondents concluded that self-driving cars should take a “utilitarian approach” to accident avoidance, in which a vehicle’s occupants are sacrificed if doing so minimizes outside casualties. For example, if a self-driving car carried two passengers and the lives of six pedestrians were at stake in an unavoidable accident, the car would sacrifice its passengers.
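That utilitarian rule reduces to a simple minimization: among the maneuvers available, pick the one with the fewest total expected casualties, even when that means sacrificing the car’s occupants. Here is a toy sketch of that logic; the maneuver names and casualty numbers are illustrative assumptions drawn from the study’s example, not any manufacturer’s actual decision code.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_casualties: int   # expected deaths inside the car
    outside_casualties: int    # expected deaths outside the car

def utilitarian_choice(maneuvers):
    """Pick the maneuver that minimizes total expected casualties."""
    return min(maneuvers,
               key=lambda m: m.occupant_casualties + m.outside_casualties)

# The study's example: two passengers vs. six pedestrians.
options = [
    Maneuver("stay on course", occupant_casualties=0, outside_casualties=6),
    Maneuver("swerve into barrier", occupant_casualties=2, outside_casualties=0),
]
print(utilitarian_choice(options).name)  # → swerve into barrier
```

A rule that always protects the occupants, like the one von Hugo describes, would instead minimize `occupant_casualties` alone — the same framework, a different objective.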

On the flip side, when asked what kind of self-driving vehicle they would buy, these same people said they would purchase one that put their own lives and their passengers’ lives ahead of everyone else on the road.

But perhaps in the not-so-distant future, when most vehicles are partially or fully computer-controlled, these ethical questions won’t even come up. “This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all,” added von Hugo. “We are working so our cars don’t drive into situations where that could happen and [will] drive away from potential situations where those decisions have to be made.”