Innovative technology has always driven human advancement and opportunities, but it also leads us all down a rabbit hole of moral and ethical dilemmas.

Today’s ethical battles are all too real, with consequences that are often unimaginable, simply because of the rapid pace at which technology develops and the opportunities unlocked by tools such as artificial intelligence (AI). Those at the forefront of technological innovation are becoming increasingly aware of the moral dilemmas arising from the use of autonomous vehicles, CRISPR, genetic engineering, weaponized drones, and more.

“Today, American schoolchildren stage mock trials of President Harry Truman, who authorized the (nuclear) bombings, to decide whether or not to impeach him for his decision,” writes Richard Rhodes, who authored the Pulitzer Prize-winning book The Making of the Atomic Bomb.

“It’s easy to forget how things were seventy years ago from the relative safety of our twenty-first century vantage point. Truman’s secretary of state, Jimmy Byrnes, told the President that summer of 1945 that he would probably be impeached if he didn’t use the bombs if they would save American lives.”

As with all ethical dilemmas, there will be people on each side and, ultimately, their choices will be judged in a different context than the one faced when the decision was made.

“Something else I thought I recognized twenty-five years ago has now accumulated that many more years of evidence: the development of nuclear weapons permanently changed the course of war itself, as the scientists who worked on those first bombs hoped it would. By packaging the escalated final destructive months of war into portable devices capable of immediate and certain delivery, the new weapons made large-scale war too deadly to fight.”

The Need for a Global Ethics and Technology Framework

The Treaty on the Non-Proliferation of Nuclear Weapons was a landmark agreement with the objective of preventing the spread of nuclear weapons and weapons technology, as well as promoting the peaceful development of nuclear technology for energy.

Though 190 states have subscribed to the global ethical framework on managing nuclear technology, no such global structure exists to guide those working in advanced science and technology research and development. In many ways, an international effort to build such an ethical system, one that does not place undue restrictions on innovation, would be ideal. However, the speed at which governments could agree on and implement such a system would struggle to keep pace with the rate at which technology advances. Science and technology can save the world from itself, but an ethical framework is needed to ensure we survive this transformation. Perhaps, then, in the immediate future, the best solution is one designed by those in industry and self-imposed.

Ethics & Technology: Artificial Intelligence

AI and machine learning technology give rise to perhaps the largest array of ethical issues in modern history. Already, these systems are being deployed, often unquestioned, without a firm understanding of the factors leading to the answers the programs spit out.

Daniel Cossins from New Scientist explains, “Modern life runs on intelligent algorithms. The data-devouring, self-improving computer programmes that underlie the artificial intelligence revolution already determine Google search results, Facebook news feeds and online shopping recommendations.”

There is a feeling that computers and algorithms simply can’t be biased in the same way humans are. However, that’s not the case. Biases have cropped up in a number of intelligent algorithms that deeply impact people’s lives.

First, there is PredPol, which is designed to predict when and where crimes will take place. However, it was found that the program had a bias for sending officers to neighborhoods with a high proportion of people from racial minorities, regardless of the crime rate. Furthermore, this can quickly become a self-reinforcing feedback loop: because crimes are recorded where officers are sent, the data fed back into the system reflects its own previous choices.
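This feedback loop can be illustrated with a toy simulation. The numbers and the "always patrol the predicted hotspot" rule below are hypothetical simplifications for illustration, not PredPol's actual algorithm:

```python
import random

random.seed(42)

# Two neighborhoods with the SAME true rate of crime.
true_crime_rate = {"A": 0.5, "B": 0.5}

# Historical records over-represent neighborhood A (the initial bias).
recorded_crimes = {"A": 20, "B": 10}

for day in range(200):
    # Patrol wherever the records show more crime so far.
    total = recorded_crimes["A"] + recorded_crimes["B"]
    patrol = "A" if recorded_crimes["A"] / total >= 0.5 else "B"

    # Crucially, crime is only recorded where officers are present.
    if random.random() < true_crime_rate[patrol]:
        recorded_crimes[patrol] += 1

print(recorded_crimes)
# Neighborhood A accumulates nearly all new records, even though
# both neighborhoods have identical underlying crime rates.
```

Because the model's predictions determine where new data is collected, the initial disparity never gets a chance to correct itself; it compounds instead.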

Another program found to carry a bias in the justice system is COMPAS, which is used to guide sentencing decisions in court. There are dozens of prominent programs that are known to carry an unintentional bias. Biases can even be trained into AI programs deliberately, creating a system that is presented as unbiased but is in fact maliciously designed.

An article in Fast Company suggests that, to combat bias and a host of other ethical dilemmas in AI technology, it’s necessary to create transparent standards, open-source the code, and make AI generally less inscrutable.

One organization already working toward this is the nonprofit AI Now, which advocates for algorithmic fairness. They propose a simple guideline for developers of services that affect people: if designers can’t explain an algorithm’s decision, they shouldn’t use it.

AI is also expected to displace an incredible number of workers in the coming years. An Oxford study suggested up to 47% of US jobs are at high risk of being eliminated due to AI and machine learning (ML). As the livelihoods of many are destroyed and a massive number of surplus workers is created, an understanding of the impact these developments have on society, along with a framework to manage that impact, should be in place.

CRISPR Improves, Forces the Conversation

CRISPR technology is a simple yet powerful tool for editing genes. It’s thought that the technology could be used to prevent certain genetic diseases, such as diabetes and muscular dystrophy. However, the tool could also be used to create “designer babies” — leading to myriad moral dilemmas.

“Safety is something we all agree is important. There’s a language for that that’s acceptable to all sides,” Josephine Johnston, director of research at the Hastings Center, told Wired. “But the other kinds of concerns that people have about this work are much more difficult to have conversations about.”

As CRISPR technology improves, the safety concern is rapidly decreasing, bringing the other ethical issues to the foreground. These issues include the role humans should have in making permanent genetic changes, as well as the consent of unborn babies. There is also a balance of priorities: should scientists and doctors maximize humanity’s well-being or respect the differences created through our genetic makeup?

Here, again, a global ethical framework for CRISPR — once the technology is deemed safe — could guide further development and usage. It could prevent a chapter in modern history similar to the one in which Americans embraced the eugenics movement, sterilizing certain members of the population in the name of genetically improving the country’s citizens.

Weaponized Drones

In many ways, the moral dilemma of weaponized drones can be tied to the conversations President Truman was having with his staff as he weighed the options of dropping nuclear bombs on Japan.

Weaponized drones allow countries to kill enemies without risking the lives of their citizens — and governments’ priorities are to protect the physical and economic safety of their people. However, critics point out that using drones makes it far too easy to kill people, as it removes the human factor from the situation. Additionally, they argue that drones cause far more civilian deaths than governments admit, which further fuels terrorism.

As with all ethical challenges, failing to decide how drones should be used and regulated is itself a decision: one that preserves the status quo.

Final Thoughts

As the ethical debates over these innovative technologies rage on, the most important step is to develop a self-imposed framework for the science, technology, engineering, and mathematics research and development ecosystem, while encouraging a global effort to formulate an agreement similar to the nuclear non-proliferation treaty.

rLoop remains aware of the moral dilemmas that arise when considering ethics and technology, which has led us to prioritize the sort of self-imposed ethical framework needed to ensure that our work is used for the greater good of humanity. We would love to work with government and industry to help encourage productive conversations to develop a framework that does not stymie innovation. If you’d like to discuss this further, I invite you to email me at brentlessard@rloop.org. Have questions about ethics and technology? Post them on our subreddit!