3.1 A duty to switch over to self‐driving cars?

We have been discussing how to deal with potential crashes involving self‐driving cars. But the promise of self‐driving cars is supposed to be that they will help us, as much as possible, to avoid crashes altogether. Let us therefore look at the bigger picture and consider arguments in favor of developing and introducing self‐driving cars that appeal to their safety potential.

Let us first return briefly to Hevelke and Nida‐Rümelin's discussion. When they argue that we should not hold car manufacturers responsible for crashes because of the importance of developing self‐driving cars, their reasoning suggests the following argument: there is a moral imperative to seek ways of making automobile traffic safer; self‐driving cars promise to be much safer than conventional cars; therefore, there is a moral imperative to develop and introduce self‐driving cars into society (Hevelke & Nida‐Rümelin, 2015). Similarly, Elon Musk of Tesla – in an interview about Tesla's "autopilot" feature – predicted, and seemed to approve of the idea, that conventional cars will eventually be forbidden and only automated cars permitted (Bloomberg, 2014). The argument implied in that interview was that the safety potential of self‐driving cars creates an imperative for people to switch over to such cars and stop using conventional cars altogether. Robert Sparrow and Mark Howard make virtually the same argument. In their version, self‐driving cars should be forbidden so long as they are less safe than regular cars; but once self‐driving cars become safer than regular cars, it is regular cars that should be forbidden (Sparrow & Howard, 2017). Moreover, the same general type of argument has been made within public health research, by Janet Fleetwood. She writes that automated driving is an "incredible invention" likely to "transform transportation" while "saving lives," and that for this reason public health leaders "should welcome" this development (Fleetwood, 2017, p. 536).

I myself – together with Jilles Smids – have put forward a related argument about the situation in which both self‐driving cars and conventional cars are available. This type of "mixed traffic" is perhaps the most realistic situation to anticipate for the coming years. On the supposition that self‐driving cars would actually be safer than conventional cars, our suggestion was as follows: people have a duty either to switch over to the safer alternative, namely autonomous cars, or to use or accept added safety precautions when using the less safe alternative, namely conventional cars (Nyholm & Smids, forthcoming).

This type of reasoning could help to reopen otherwise mostly dormant debates about safety technologies for regular cars, such as speed limiters and alcohol locks. There are contributions to the applied ethics literature that present moral arguments in favor of such technologies, for example papers by Jilles Smids (Smids, 2018) and by Kalle Grill and Jessica Nihlén‐Fahlquist (Grill & Nihlén Fahlquist, 2012). But on the whole, such technologies – and ways of mitigating traffic risks more generally – have not received a great deal of attention within practical ethics. The introduction of self‐driving cars, however, might put pressure on people either to make their conventional cars safer or to switch over to self‐driving cars. Technologies that could help to make conventional cars safer are precisely things such as speed limiters and alcohol locks.

This choice between automated cars and conventional cars is another thing that can be related back to the comparison between self‐driving cars and military robots. Military robots, too, have sometimes been lauded for their supposed life‐saving potential: it is thought that they can save the lives of soldiers at risk on the battlefield, and potentially also the lives of some innocent non‐combatants. Military robots have also been said to have the potential to conform better to ethics and the law than human soldiers sometimes do. In particular, Ronald Arkin has argued that if military robots can be programmed to follow the laws of war and rules of engagement much more strictly than human soldiers do, armies will have a better chance of preventing war crimes. That, Arkin argues, is a good reason to switch over from deploying human soldiers to deploying more reliably law‐abiding military robots (Arkin, 2010).

How good is this particular analogy between self‐driving cars and military robots? What further ethical issues might reflecting on this apparent analogy help to bring up? In considering this, Purves and colleagues discuss an objection to military robots that they worry might apply to self‐driving cars as well. Ultimately, however, they argue that it does not apply to self‐driving cars in the same way as it does to military robots. So what is the objection?

Purves et al. argue that it is an inherently problematic feature of military robots that they are programmed to perform targeted killings of human beings. To many people, the very idea of a machine pre‐programmed to kill human beings is disturbing and unacceptable. The worry, given the analogy back to self‐driving cars, is that they too might need to be programmed to kill humans under certain circumstances – namely, in the sense that was discussed in the first of these two articles (Purves et al., 2015).

That is, self‐driving cars might be programmed to crash in unavoidable accident scenarios in ways that target certain people (e.g., the smaller group in a "trolley" scenario where the car can crash into either a larger or a smaller number of people). Would the fact that a self‐driving car would be pre‐programmed to kill people under certain circumstances undermine the argument in their favor based on the prediction that, overall, they will kill far fewer people than regular cars do?4

Purves and colleagues agree that self‐driving cars would indeed need to be programmed to kill under certain circumstances. But they think that this is not bad and immoral in the way that, in their view, it is bad and immoral that military robots are programmed to kill. Their argument for this standpoint is reminiscent of the so‐called doctrine of double effect. They argue that whereas military robots are problematic because their primary goal is to kill people, self‐driving cars are not problematic in the same way. The main goal of self‐driving cars is not to crash into people; it is to allow people to travel more safely than they can in regular cars. That self‐driving cars may need to be pre‐programmed to crash into people in rare circumstances is a foreseen and acknowledged bad side effect, not the primary function they are meant to perform. One can therefore coherently oppose military robots – even if they would on the whole make wars safer – while favoring self‐driving cars, even though they too would be pre‐programmed to kill human beings under certain circumstances (Purves et al., 2015).

How good is this argument? As I noted, it has much in common with the doctrine of double effect: a general moral principle according to which causing harm as a merely foreseen, unintended side effect of pursuing a good end can be permissible in cases where causing the same harm as one's primary goal, or as an intended means, would not be. This doctrine does to a large extent seem to fit with common sense, as David Edmonds argues in his book about the trolley problem (Edmonds, 2013). That said, it is worth noting that many moral philosophers have presented general objections to the doctrine of double effect (e.g., Scanlon, 2008). Whether we should accept the doctrine falls outside the scope of this discussion.