Jim Davies's suggestion that we programme ethics into artificial intelligence meta-systems as a safeguard could well backfire — by compromising our ability to judge ethical implications ourselves (Nature 538, 291; 2016).

In an earlier version of the future, robot lawnmowers and kitchen appliances promised us more leisure time. We now face the spectre of mass human displacement from a consumption-based economy by equipment that can do things much more efficiently than people can.

The 'age of information' promised global connectivity, but this has wrought distraction to a point at which only lurid excesses can focus our undivided attention on the society to which we all belong.

And as computer-generated imagery colonizes our imaginations, many are barely swayed by real violence (the wanton destruction of Syrian cities comes to mind). There is even evidence that video gaming driven by computer-generated imagery can alter a player's perception of acceleration and gravity (see, for example, A. B. Ortiz de Gortari and M. D. Griffiths Int. J. Hum. Comput. Interact. 30, 95–105; 2014) — compromising their decision-making skills in a world where real physics is the law. Such trends don't bode well for 'ethical' computers.

Author information: Michael Stocker, Ocean Conservation Research, Lagunitas, California, USA.


Cite this article: Stocker, M. Be wary of 'ethical' artificial intelligence. Nature 540, 525 (2016). https://doi.org/10.1038/540525b

Published: 21 December 2016

Issue date: 22 December 2016