It’s a moral dilemma

Five years hence: You’re driving — or, more accurately, being driven in — your fully autonomous car. You are — or it is — winding up a mountain road, craggy granite rock face to your immediate left, glorious picturesque cliff to your right. Your laptop is 8G Wi-Fi’ed to the office. There’s a cup of steaming java in the heated cup holder. All is right in your world.

You come around a blind turn only to find a school bus stopped on the road. It’s old — no web connectivity to warn your connected car of the impending doom — and it’s blocking both lanes. With no time to stop, you face two stark and stomach-churning choices: crash into a school bus full of kids or drive over the cliff.

Now, if a human being were at the wheel, the decision would be pure instinct and, being instinctual, there’s absolutely no way to tell what the choice might be. A computer, on the other hand, is nothing if not consistent. More importantly, unlike a human making a spur-of-the-moment choice, a computerized car would have to be programmed ahead of time to make these life-and-death decisions.

Which brings us to the crux of a looming moral dilemma: Who exactly will decide how the computer-controlled car chooses between the greater good (saving the kids) and saving its paying customer (you in the back seat)? It’s hard to believe a lowly software engineer will get to make such a far-reaching determination. Will there be an ethics officer in every automaker’s engineering department to take responsibility for their cars’ actions? Or will the government set an industry-wide policy as to who lives and who dies?
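To make the dilemma concrete: strip away the philosophy and the choice ultimately reduces to a branch of code someone must write in advance. The sketch below is purely hypothetical — every name and both “policies” are invented for illustration, not drawn from any automaker’s actual software.

```python
# Hypothetical illustration only: the point is that this branch must be
# authored ahead of time, and someone must decide what it says.

def choose_maneuver(lives_at_risk_outside: int, lives_inside: int,
                    policy: str = "greater_good") -> str:
    """Return the maneuver a pre-programmed car would take.

    policy: "greater_good"      -> sacrifice whichever side has fewer lives
            "protect_customer"  -> always protect the car's own occupants
    """
    if policy == "protect_customer":
        # The paying customer comes first, whatever the cost outside.
        return "stay_on_road"
    # "greater_good": minimize total lives lost.
    if lives_at_risk_outside > lives_inside:
        return "swerve_off_cliff"
    return "stay_on_road"
```

A busload of 40 kids versus one occupant yields opposite answers under the two policies — which is exactly the determination the column argues no lowly software engineer should be left to make alone.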

Dismiss this conjecture as Libertarian slippery-slope craziness all you like, but two current or former automaker board members/senior executives have commented to Yours Truly that, after the legal question — who is responsible for a car with no driver behind the wheel? — programming their products for just such occurrences is the second-largest impediment to the completely autonomous car.

It’s an ethical dilemma

More than a few prognosticators — Yours Truly included — have worried that connectivity and robots driving our cars might lead to a Terminator 3: Rise of the Machines-like world of government intrusion and Skynet takeover. What was once science fiction schlock is now potential reality, with networked cars, trucks and buses determining that the best way to improve traffic flow would be to kill all humans, especially pedestrians. Usually, this dilemma splits along purely idealistic lines: Libertarians decrying the loss of personal freedoms, while insurance companies, Silicon Valley and automakers form an unholy alliance on the care-and-control side of the equation.

Enter Elon Musk who, as CEO of Tesla, has boasted that his company’s Model S will be the first self-driving car offered in North America. The thing is that Mr. Tesla also recently admitted to donating $10 million US to the Future of Life Institute, a non-profit organization that works to mitigate “the risks facing humanity” because “technology has given life the opportunity to flourish like never before … or to self-destruct.” I’m pretty sure this is what they call hedging your bets.

It’s a legal question

The question of who is legally responsible for an autonomous car — the “driver” who is no longer behind the wheel or the manufacturer of the car — has been well documented as the major roadblock to introducing fully autonomous cars (Level 4 in U.S. National Highway Traffic Safety Administration parlance) as opposed to semi-autonomous ones (Level 3 being defined as Limited Self-Driving Automation, which requires a driver to remain behind the wheel). What’s so very interesting is who is promoting semi-autonomy versus total self-driving.

Automakers, on one hand, are universal in their insistence that there will always be a driver behind the wheel. Audi, despite showing a totally driverless RS 7 circulating the Hockenheimring racetrack, insists the driver is in charge. Ford recently announced it would definitely not be the first to introduce a totally autonomous car. The rest of the industry similarly dictates that a human be behind the wheel. Google, on the other hand, claims its future cars will need no driver; indeed, it would prefer there be no steering wheel at all.

Why the big difference? Well, far be it from me to resort to reductivism, but Audi (unintended acceleration in 1986), Ford (exploding Pintos in 1977 and rolling-over Explorers in 2000) and Toyota (run-on Priuses in 2009), as well as other automakers, have all tasted the lash of the American tort system. Google, on the other hand, has yet to enjoy the attentions of an automotive class action suit. Few dispute that computer-controlled cars will be safer; it’s just that no one with any experience in American civil justice wants to take responsibility.

The safety question

Or not. According to the University of Michigan’s Transportation Research Institute, our assumption that self-driving cars are inherently safer may be exaggerated. Its “Road Safety With Self-Driving Vehicles: General Limitations And Road Sharing With Conventional Vehicles” study concluded — Volvo take note — that “the expectation of zero fatalities with self-driving cars is not realistic.” And, in addressing the supposed superiority of digital (the car’s computer) over analog (that would be you and me), Michael Sivak, co-author of the report, says, “It is not a foregone conclusion that a self-driving vehicle would ever perform more safely than an experienced, middle-aged driver.” Worse yet, the paper also concludes, “during the transition period when conventional and self-driving vehicles would share the road, safety might actually worsen, at least for the conventional vehicles.”

Enter, stage left, one school bus and a self-driving car.

And maybe someone’s got their priorities all wrong

While the world frets about the havoc wreaked by the errant automobile, The Economist quietly reported in January 2015 that guns would soon kill more people in the United States than automobile accidents do. The shift is especially pronounced among under-24-year-olds, normally overrepresented in motor vehicle fatalities thanks to a predisposition to driving while drunk and/or distracted. Indeed, while gun deaths for 14-to-24-year-olds have held steady, according to the Center for American Progress, automobile fatalities for the same age group have dropped dramatically in the last decade. As The Economist notes, automobiles are becoming immensely safer; legislation — such as the National Highway Traffic Safety Administration’s recent call for mandatory rear-view cameras — has forced automakers to implement technologies that prevent accidents. On the other hand, the magazine says, “safety features on firearms — such as smart guns unlocked by an owner’s thumbprint or radio-frequency encryption — are opposed by the National Rifle Association” and unlikely to ever become law.