We talk about self-driving cars and advanced artificial intelligence (AI) on a regular basis here. At this stage in the game, self-driving cars are in fact a reality, currently roaming streets in and around Silicon Valley. These are, of course, prototype vehicles that still have a human driver behind the wheel in case an unforeseen hazard presents itself that would otherwise stymie the AI that normally drives the vehicle.

While the AI present in today’s experimental self-driving cars can navigate city streets, change lanes, avoid accidents and is, for the most part, a fairly competent “driver”, what happens when it comes to an “us versus them” scenario? What if a self-driving car is presented with a no-win situation, in which someone will likely die no matter the outcome of a collision? Does the self-driving car protect its passengers at all costs, with no regard for the lives of others, or should the car instead put its passengers in harm’s way to avoid the higher number of casualties that could result from a collision with pedestrians or other motorists?

That’s the subject of a new study published in Science, entitled “The Social Dilemma of Autonomous Vehicles.” Researchers surveyed 1,928 participants on a number of scenarios in which a self-driving car faces a moral dilemma that would result in the death of one or more people. The results showed that people overwhelmingly decided that self-driving cars should take a “utilitarian approach” in which casualties are minimized, even if that means sacrificing the lives of the car’s own passengers for the greater good. So if there is just one passenger aboard a car and the lives of 10 pedestrians are at stake, the survey participants were perfectly fine with a self-driving car “killing” its passenger to save many more lives in return.
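For the curious, the utilitarian rule the survey participants endorsed boils down to a very simple decision procedure: among the available maneuvers, pick the one with the fewest expected deaths. Here is a toy sketch of that idea in Python — the maneuver names and casualty counts are purely illustrative assumptions, not anything from the study itself:

```python
# Toy sketch (not from the Science study): a purely utilitarian
# decision rule that picks whichever maneuver minimizes the total
# expected number of deaths. Scenario values are illustrative only.

def choose_maneuver(options):
    """options: dict mapping maneuver name -> expected deaths.
    Returns the maneuver with the lowest expected death count."""
    return min(options, key=options.get)

# The article's example: one passenger aboard, ten pedestrians ahead.
scenario = {
    "stay_course": 10,  # plow ahead: 10 pedestrians die
    "swerve": 1,        # swerve: the lone passenger dies
}

print(choose_maneuver(scenario))  # -> swerve
```

Of course, the study’s whole point is that real people only accept this rule in the abstract; the moment they imagine themselves as the passenger, they want the arithmetic weighted in their own favor.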

But on the flip side, these same participants said that if they were shopping for a car or riding as a passenger, they would prefer a vehicle that would protect their lives by any means necessary. In other words, everything is fine and dandy when you’re dictating what other people drive, but when your own butt is on the line, you want the safety of yourself and your passengers held in higher regard. Participants also balked at the notion of the government stepping in to regulate the “morality brain” of self-driving cars.

To accompany the study, MIT Media Lab has developed an online “Moral Machine” game that lets you walk through a number of scenarios deciding who lives and who dies when a self-driving vehicle has to make a tough call.

In one scenario, the car can plow straight ahead, mowing down a woman, a boy, and a girl who are crossing the road illegally against a red signal. Alternatively, the car can swerve into the adjacent lane, killing an elderly woman, a male doctor, and a homeless person who are crossing the road lawfully on a green signal. Which group of people deserves to live? There are a number of situations like these that you can click through.

With all this being said, we can’t help but recall a rather poignant scene from the movie I, Robot, in which Detective Spooner (played by Will Smith) recounts how a robot saved his life in a car crash.

The vehicle he was traveling in collided with another car, and both careened into a body of water. A robot that happened to be passing by jumped into the water looking for survivors. It determined that Spooner had a much higher chance of survival than a child in the other vehicle, so his life was spared while the child was left to drown.

I, Robot may just be a movie, and these kinds of decisions might seem like science fiction today. In the future, however, a harsh AI reality may whittle the worth of our very existence down to simple, unemotional percentages in a computer’s brain.