Are ethics keeping pace with technology?

Returning from vacation, I found my inbox overflowing with emails announcing robot “firsts.” At the same time, my relaxed post-vacation disposition was quickly rocked by the news of the day and by recent discussions regarding the extent of AI bias within New York’s financial system. These seemingly unrelated incidents are very much connected: together they represent the paradox of today’s accelerating pace of invention.

Last Friday, the University of Maryland Medical Center (UMMC) became the first hospital system to safely transport a live organ by drone to a waiting transplant patient with kidney failure. The demonstration illustrates the huge opportunity for Unmanned Aerial Vehicles (UAVs) to significantly reduce the time and cost of organ transport, and to improve outcomes, by removing human-piloted helicopters from the equation. As Dr. Joseph Scalea, UMMC project lead, explains, “There remains a woeful disparity between the number of recipients on the organ transplant waiting list and the total number of transplantable organs. This new technology has the potential to help widen the donor organ pool and access to transplantation.” Last year, America’s managing body of the organ transplant system reported a waiting list of approximately 114,000 people, with 1.5% of deceased-donor organs expiring before reaching their intended recipients, largely due to unanticipated transportation delays of up to two hours in close to 4% of recorded shipments. Based upon this data, unmanned systems could potentially save more than one thousand lives. In the words of Dr. Scalea, “Delivering an organ from a donor to a patient is a sacred duty with many moving parts. It is critical that we find ways of doing this better.” Unmentioned in the UMMC announcement are the ethical considerations required to support autonomous delivery, chief among them ensuring that the rush to recover organs in the field does not override the goal of first saving the donor’s life.

As May brings clear skies and the songs of birds, the prospect of non-life-saving drones crowding the airspace above is a haunting image for many. Last month, last-mile delivery by UAVs came one step closer, with Google’s subsidiary Wing Aviation becoming the first drone operator approved by the U.S. Federal Aviation Administration and the Department of Transportation. According to the company, consumer deliveries will commence within the next couple of months in rural Virginia. “It’s an exciting moment for us to have earned the FAA’s approval to actually run a business with our technology,” declared James Ryan Burgess, Wing Chief Executive Officer. The regulations still ban drones in urban areas and limit Wing’s autonomous missions to farmlands, but they enable the company to start charging customers for UAV deliveries.

While the rural community’s administrators are excited “to be the birthplace of drone delivery in the United States,” it is unknown how its citizens will react to a technology prone to provoking noise and privacy complaints. Mark Blanks, director of the Virginia Tech Mid-Atlantic Aviation Partnership, optimistically stated, “Across the board everybody we’ve spoken to has been pretty excited.” Cautiously, he admits, “We’ll be working with the community a lot more as we prepare to roll this out.” Google’s terrestrial autonomous driving tests have received less than stellar reviews from locals in Chandler, Arizona, where tensions reached a crescendo earlier this year when one resident pulled a gun on a car (one-third of all Virginians own firearms). Understanding the rights of citizens to police the skies above their properties is an important policy and ethical issue as unmanned operators move from testing systems to live deployments.

The rollout of advanced computing technologies is not limited to aviation; artificial intelligence (AI) is being rapidly deployed across enterprises and organizations throughout the United States. On Friday, McKinsey & Company released a report on the widening penetration of deep learning systems within corporate America. While it is still early in the development of such technologies, almost half of the respondents in the study stated that their departments had embedded such software within at least one business practice this past year: “Forty-seven percent of respondents say their companies have embedded at least one AI capability in their business processes—compared with 20 percent of respondents in a 2017 study.” This dramatic increase in adoption is driving tech spending, with 71% of respondents expecting a large portion of their digital budgets to go toward implementing AI. The study also tracked the perceived value of AI, with “41 percent reporting significant value and 37 percent reporting moderate value,” compared to 1% “claiming a negative impact.”

Before embarking on a journey south of the border, I participated in a discussion at one of New York’s largest financial institutions about AI bias. The output of this think tank was a suggested framework for administering AI throughout an organization to protect its employees from bias. We listed three principles: 1) the definition of bias (as it varies from institution to institution); 2) the policies for developing and installing technologies (from hiring to testing to reporting metrics); and 3) employing a Chief Ethics Officer who would report to the board, not the Chief Executive Officer (as the CEO is concerned with profit and could potentially override ethics for the bottom line). These conclusions were supported by a 2018 Deloitte survey that found that 32% of executives familiar with AI ranked ethical issues as one of the top three risks of deployments. At the same time, Forbes reported that the idea of engaging an ethics officer is a hard sell for most blue-chip companies. In response, Professor Timothy Casey of California Western School of Law recommends repercussions for malicious software similar to those in other licensed fields: “In medicine and law, you have an organization that can revoke your license if you violate the rules, so the impetus to behave ethically is very high. AI developers have nothing like that.” Without such accountability, he cautions, companies will continue to operate in an atmosphere whereby “being first in ethics rarely matters as much as being first in revenues.”

While the momentum of AI adoption accelerates like a runaway train, some forward-thinking organizations are starting to take ethics very seriously. This past January, for example, Salesforce became one of the first companies to hire a “chief ethical and humane use officer,” empowering Paula Goldman “to develop a strategic framework for the ethical and humane use of technology.” Writing this article, I am reminded of the words of Winston Churchill in the 1930s, cautioning his generation about balancing morality with the speed of scientific discovery, as the pace of innovation even then far exceeded humankind’s own development: “Certain it is that while men are gathering knowledge and power with ever-increasing and measureless speed, their virtues and their wisdom have not shown any notable improvement as the centuries have rolled. The brain of modern man does not differ in essentials from that of the human beings who fought and loved here millions of years ago. The nature of man has remained hitherto practically unchanged. Under sufficient stress—starvation, terror, warlike passion, or even cold intellectual frenzy—the modern man we know so well will do the most terrible deeds, and his modern woman will back him up.”

Join RobotLab on May 16th when we dig deeper into ethics and technology with Alexis Block, inventor of HuggieBot, and Andrew Flett, partner at Mobility Impact Partners, discussing “Society 2.0: Understanding The Human-Robot Connection In Improving The World” at SOSA’s Global Cyber Center in NYC – RSVP Today!