A.I.-powered robots are coming, and a group working with the world’s largest technical professional association wants to make sure their creators factor in thousand-year-old ethical traditions before it’s too late. John Havens is at the forefront of a group advocating for an ethically minded approach to design, and on Monday, he published a new document aimed at teaching the minds behind the robo-revolution how to avoid unforeseen consequences of their work.

“I’m very proud of it,” Havens, executive director of the IEEE Global Initiative, tells Inverse. “It’s globally created by experts, and then globally crowdsourced by more experts to edit. It’s a resource. It’s a syllabus. It’s sort of a seminal must-have syllabus in the algorithmic age.”

The 290-page Ethically Aligned Design, released under a Creative Commons license, is the culmination of three years’ work trying to grapple with the biggest questions around how autonomous and intelligent systems (or “A/IS”) maintain human values and ethical principles through smart design. Its authors see it as an essential document for preparing future innovators to avoid disasters like the Cambridge Analytica scandal or the development of racist A.I. systems.

“If the patient is in the room with their family, the last thing you’d want to do is have the robot say, like, ‘Mr. Smith is going to die in 20 minutes!’”

Havens hopes the document will encourage designers to consider potential unintended consequences of their work, giving the example of a robot in a hospital that’s been designed to provide useful information to caregivers. While most humans could read the room and dispense advice accordingly, the robot may struggle.

“If the patient is in the room with their family, the last thing you’d want to do is have the robot say like, ‘Mr. Smith is going to die in 20 minutes!’” Havens says. “No, that would suck, right? But this is the type of unintended consequences that we’re really trying to address with the entire document.”

Pepper, a robot designed to dispense advice. Inverse

The question of how to provide ethical guidance to machines that don’t yet exist sounds like a conundrum from The Good Place, but the scale of the task led the initiative to expand dramatically over the years. When Inverse covered the launch of the first draft at the end of 2016, the initiative had around 200 members. But when the document launched with an appeal for more input, respondents sent over 300 pages of feedback.

“A lot of it was people saying this is really good work, but it feels very Western, indicating that it’d be good to get more members from, say, China, Japan, and South Korea, Africa, etc.,” Havens says. “So our tactic was to reach out to those people who had given the feedback and say, ‘Thank you, do you want to join the initiative?’”

This input has been vital for understanding how a robot designed for international markets could misunderstand local traditions. Havens notes that Western ethical traditions, following the likes of Aristotle, prioritize the individual over the community, an emphasis that manifests in European law, human rights law, and the work of the United Nations. East Asian traditions like Confucianism and Shinto, as well as Ubuntu ethics in Africa, focus on prosperity for the community and others before turning to the individual.

“This is more like building a robot that’s got autonomous aspects to it, and then releasing it in the States, in Japan and Africa, without understanding these core traditions that have really helped people frame and see the world,” Havens says. “So you really have to immerse yourself and understand those end-user values.”

These values lead to a lot of big philosophical questions, and rather than prescribing specific steps, the document’s section on classical ethics instead lays out the various approaches and tries to explain the issues at stake.

While virtue ethics asks about the goal or purpose of A/IS and deontological ethics asks about its duties, the fundamental question posed by Ubuntu would be, “How does A/IS affect the community in which it is situated?”

These ideas are fleshed out in a variety of chapters covering policy, law, personal data, and methods to help guide design.

The resultant document summarizes its teachings into three “pillars.” The first focuses on universal human values and ensuring systems respect these traditions, the second is political self-determination that can help build trust in society, and the third is ensuring technical dependability to develop trust in the service. These are mapped onto eight principles that can help put these pillars into practice.

The pillars and how they match with the principles. IEEE

As the project has progressed, it has gradually accounted for new technologies. The second draft, released in December 2017, added new committees focused on areas like well-being and mixed reality. On the latter, association member Monique Morrow warned Inverse about the prospect of people hijacking photo-realistic avatars in cyberspace to commit crimes. Developers of these virtual worlds will also have to consider how people respond to random events and a lack of agency, which may feel more distressing in a designed environment than in real life, where randomness is accepted as part of everyday experience.

Around 1,000 people were part of the project by that stage, and the second draft again drew over 300 pages of feedback. The team expanded once more for the final version and now counts around 2,100 people.

The team is not stopping here. The plan now is to develop a syllabus to help universities teach the next generation these values. There are also plans for abridged versions, with the first two focused on engineering and law, so industry professionals can get the information that matters most to their particular work. The second edition is targeted for either December 2019 or the first quarter of 2020.

With SingularityNET’s Ben Goertzel predicting the first human-level A.I. could arrive in just 10 years’ time, it could be the ideal time to look back on several thousand years of ethical traditions.