The tough part here is designing the algorithms that will control these self-driving rides, and teaching the artificial intelligence to deal with unavoidable harm. Doing so means satisfying what Science calls a trio of "incompatible objectives": the algorithms must be consistent, must not cause public outrage and must not discourage buyers. It's tricky, and it raises the question of whose lives matter more -- those outside the vehicle, or its passengers? When humans make split-second decisions, it's out of instinct and self-preservation, not programming.

But if someone knowingly bought an autonomous vehicle programmed to favor passengers over pedestrians, would they be held liable if it killed someone outside the car?

"I do not think concerns about very rare ethical issues of this sort [...] should paralyze the really groundbreaking leaps that e are making in this particular domain of technology, policy and conversations in liability, insurance and legal sectors, and consumer acceptance," assistant research scientist Anuj. K Pradhan, of UMTRI's Human Factors Group, tells The Verge.

Again, this is all extremely early, but it's for the best that the conversation is starting now rather than, say, once a whole fleet of self-driving cars is already on the road. For more troubling morality questions, be sure to hit the source links below.