Introspection has become a commodity.

The idea that our ability to reflect has been outsourced to algorithms may seem hyperbolic. We assume we retain agency over the choices we make, shaped by personalization, perhaps, but not subsumed within a Matrix of someone else’s making.

But how do you know? Have you created a list of activities you’d never delegate? Could you even discern where your moral boundaries end and codified biases begin?

While welcoming the feedback that sensors, data and Artificial Intelligence provide, we’re at a critical inflection point: demarcating the boundary between assistance and automation has never been more central to human well-being. But today, beauty is in the AI of the beholder. Desensitized to the value of our personal data, we hemorrhage the precious insights into our identity that define the moral nuances necessary to navigate algorithmic modernity.

If no values-based standards exist for Artificial Intelligence, then the biases of its manufacturers will define our universal code of human ethics. But this should not be their cross to bear alone. It’s time to stop vilifying the AI community and start defining, in concert with their creations, what the good life means at the intersection of our consciousness and their code.

The intention of ethics

“Begin as you mean to go forward.” Michael Stewart is founder, chairman & CEO of Lucid, an Artificial Intelligence company based in Austin that recently announced the formation of the industry’s first Ethics Advisory Panel (EAP). While Google announced the creation of a similar board when it acquired AI firm DeepMind in January 2014, no public evidence of that board’s efforts currently exists (as a Google PR representative confirmed for this piece). Lucid’s Panel, by comparison, has already begun functioning as an organization separate from the analytics side of the business, providing oversight for the company and its customers. “Our efforts,” Stewart says, “are guided by the principle that our ethics group is obsessed with making sure the impact of our technology is good.”

Kay Firth-Butterfield is chief officer of the EAP, charged with staying on the vanguard of the ethical issues affecting the AI industry and society as a whole. Internally, the EAP serves as the hub of ethical behavior for the company; someone from Firth-Butterfield’s office even sits on every core product development team. “Externally,” she notes, “we plan to apply Cyc intelligence (shorthand for ‘encyclopedia,’ Lucid’s AI causal reasoning platform) for research to demonstrate the benefits of AI and to advise Lucid’s leadership on key decisions, such as the recent signing of the LAWS [lethal autonomous weapons systems] letter and the end use of customer applications.”

Ensuring the impact of AI technology is positive doesn’t happen by default. But as Lucid is demonstrating, ethics doesn’t have to stymie innovation by dwelling solely in the realm of risk mitigation. Ethical processes that align with a company’s core values can yield more deeply relevant products and increased public trust. Transparently including your customers’ values in those processes puts the person back into personalization.

Lucid’s EAP has set a new standard for innovation within the AI industry, in contrast to DeepMind’s deferral. That ethics of silence speaks volumes, implying a prioritization of patents over open alignment with human values. But as Stewart notes, uncertain ethical practices do not make for good business decisions. “To wait until you’re already building products that are disruptive in negative ways to an industry doesn’t make sense. We want to apply best practices from the start, so our impact is ethically validated.”

The inspiration of introspection

“I see my role not as saying, ‘No, you can’t do this,’ but as asking what goals the engineer and designer have. How can we attain these goals without infringing on the cultural values we hold dear?” Aimee van Wynsberghe, PhD, is assistant professor of philosophy of technology at the University of Twente in the Netherlands and a thought leader in the nascent ethics-adviser industry. Unlike a member of a traditional ethics committee or Institutional Review Board (IRB), an adviser functions more as a designer than an academic. Rather than focusing solely on potential negative consequences, advisers provide methodologies that help participants form more robust conceptions of a technology or product. For Artificial Intelligence, this format can inspire innovation. “This is where companies could really benefit,” says van Wynsberghe. “When an adviser acts as a member of the design team, they’re not placing limitations on the process but help create utopic visions leading to practical impact.”

“You can sometimes do things that are entirely legal yet highly unethical.” Roland van Rijswijk works at SURFnet, the National Research and Education Network in the Netherlands connecting academia and research institutes throughout the country. He recently worked with van Wynsberghe to create a booklet designed to help staff identify ethical issues concerning how their data would be used by outside researchers. He’s quick to note that the booklet (soon to be made public) is not simply a checklist for getting approval but a blueprint for spirited staff discussions designed to produce educated decisions. As the booklet points out, “Virtue ethics allows for a discussion beyond hard and fast rules or duties and goes further than a discussion of consequences alone. It demands that one search for inner motivation and commitment, to articulate the intentions behind an action.” This articulation of accountability is how introspection breeds innovation. While some ethical decisions may seem clear-cut, values are subjective. It’s in discussing issues openly that communal understanding can illuminate previously unclear paths.

“The more we interact with systems engaging human intentionality, the more we’re going to have to understand ourselves.” Jake Metcalf is a fellow at Data & Society, a research institute in New York City focused on social and ethical issues regarding data-centric technology. He’s also co-founder of the consulting firm Ethical Resolve, and notes that when a company tries to figure out what ethical decisions it needs to make to fulfill its values, it gains more insight into its product and market. The more self-reflective a company can be, the more successful it’s going to be.

He admits the Socratic process is not always fun — clients will sometimes complain about his ‘aggravating questions’ — but it’s in posing these difficult scenarios that companies gain clarity about their business decisions. “To be a philosopher is really just to ask rigorously naïve questions,” says Metcalf. “You ask a question that might seem silly at first, but when you track it to its logical conclusion you get to basic values you may not have realized you had.”

The heuristics of humanity

“The risk of not starting a process for defining global ethical standards is that waiting could hinder innovation for AI by resulting in incompatible algorithms across companies.” Konstantinos Karachalios is managing director of the IEEE Standards Association, the consensus-building organization that is part of IEEE, the world’s largest professional association of engineers. Along with interoperability issues, Karachalios believes AI standards must include better control of personal data and identity. “It’s an illusion to say that privacy is dead. If a person doesn’t have agency surrounding their data and communications, how can they contribute to any democratic process?”

Karachalios’ perspective is refreshingly philosophical yet intentionally Socratic, designed to inspire a global dialogue around AI ethics that helps IEEE identify standards that advance and serve humanity. It’s a process that prioritizes people over machines. Karachalios feels moral standards imposed by veiled algorithms would leave no room for deviation in ethical choice: Artificial Intelligence would define individual and cultural standards, and humans would always look inferior by comparison. “Part of what comprises our human dignity,” he says, “is our capacity for inconsistency, along with the values directing us to improve.”

Geoff Colvin is Fortune Magazine’s senior editor-at-large and author of Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will. The book describes how the economic value of human traits such as empathy is increasing as automation shifts the nature of society. Until this point in history, machines have complemented our abilities rather than substituted for them. Now many economists feel the balance is shifting, making machine labor inevitable and forcing us to rethink the morality of work. “This is not a new world economically,” says Colvin, “it’s a new world ethically.”

But his book isn’t focused on the semantics of employment, or on which jobs robots will replace when. His hypothesis hinges on a subjective reality we must embrace as we create the ethical standards that will determine our fate. “I am arguing,” he says, “that there are certain experiences humans will value more highly in other humans even if a computer could do them.”

The morality of the moment

When it comes to scrutinizing our actions and influencing our emotions, the algorithms of the aggregated Internets already control our ethical identity. But we cannot evolve the human race based on the randomized outsourcing of our collective free will. It’s time for the Artificial Intelligence industry to prioritize the creation of ethical standards and to leverage the innovation that stems from transparent dialogue with its stakeholders. And it’s time for the rest of us to buck up and support those efforts by embracing introspection before we lose the chance.

Because how will machines know what we value if we don’t know ourselves?