We need to scale “moralware” through leadership that is guided by our deepest shared values and ensures that technology lives up to its promise: enhancing our capabilities, enriching lives, truly bringing us together, and making the world more open and equal. This means seeing not just “users” and “clicks” but real people, who are worthy of the truth and can be trusted to make their own informed choices.

Meredith Whittaker

Co-founder and co-director, AI Now Institute

It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect. This, even as A.I. systems are increasingly tasked with socially significant decisions, from who gets hired to who gets bail to which school your child is permitted to attend. We urgently need ways to hold A.I., and those who profit from its development, accountable to the public. This should include external auditing and testing that subjects A.I. companies’ infrastructures and processes to publicly accountable scrutiny and validation.

It must also engage local communities, ensuring those most at risk of harm have a say in determining when, how or if such systems are used. While building these cornerstones of trust will require tech community cooperation, the stakes are too high to rely on voluntary participation. Regulation will almost certainly be necessary, as what’s required will necessitate major structural changes to the current “launch, iterate and profit” industry norms.

Deep Nishar

Senior managing partner, SoftBank Vision Fund

A.I. systems should assist humanity, not threaten it — enabling us to offload cumbersome tasks in favor of more meaningful activity. Intelligent systems are already helping us discover new eco-friendly materials, assisting doctors with care decisions, making factory production lines more efficient and accelerating drug discovery. Our outlook should be one of optimism, not fear.

Sara Menker

Founder and chief executive, Gro Intelligence

Trust can only be built if there is transparency and accountability. The tech community needs to come together and create frameworks that allow for transparency while respecting I.P. and the ability of models to learn, evolve and continuously change. Transparency is step one to accountability, and accountability is critical, especially in domains such as health, food and education, where A.I. can also be transformational.

Adena Friedman

President and chief executive, Nasdaq

As the saying goes, trust is earned, not given. Our experience in the capital markets has demonstrated that transparency is one of the keys to trust. To build trust, those applying A.I. to create new capabilities should consider how much to share — with their clients and other stakeholders — about the inputs used, logic within and ultimate outputs from their machine learning tools. The goal to gain trust should be to demystify the process of creating the new capabilities, not to treat it like a new magic that clients cannot comprehend. My view is that A.I., if provided with transparency, will ultimately allow all industries to leverage the best of humans and machines together to create better, safer and smarter solutions for customers.

Reid Hoffman

Co-founder and executive chairman, LinkedIn; partner, Greylock Partners

It’s important to significantly fund research and development on A.I. safety — so that A.I. will have very positive outcomes for humanity and contained risks. A.I. safety will include transparency on algorithms and processes. A.I. safety will include techniques for understanding the justice and fairness of data sets used to build machine learning. And A.I. safety will include a good sense of the parameters of operation of the machines. The industry should work together on A.I. safety, to maximize the outcomes for the world.