At a conference this week, Microsoft Research scientist and leader Eric Horvitz said that the company has given up "significant sales" because it was worried the customer would use AI for harmful purposes.

Microsoft tells Business Insider that the company has never cut off an existing contract over these concerns, but it has turned potential customers away.

Horvitz says that Microsoft has also placed contractual limits on what customers can do with AI, for ethical reasons.

Long-time Microsoft scientist Eric Horvitz says that the software company takes AI ethics so seriously that “significant sales have been cut off” because it was concerned the potential customer would use its technology for no good.

Horvitz, a director and technical fellow with Microsoft Research, made his remarks on stage at Carnegie Mellon University's K&L Gates Conference on Ethics and AI on Monday, as originally reported by GeekWire.

I got in touch with Microsoft for more clarity on Horvitz's remarks. The company confirmed that Microsoft had never cut off a deal with an existing customer — Horvitz was referring to the loss of possible revenue from potential customers.

“Microsoft may decide to forego the pursuit of business proposals for numerous reasons, including the company’s commitment to upholding human rights,” a spokesperson tells Business Insider.

Beyond turning away those deals entirely, Horvitz says that Microsoft has placed limitations on what customers can do with its AI tech: “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type,’” he said, per GeekWire.

That's an unusual point for Horvitz to make: Microsoft itself offers cloud-based services for developers to easily put facial recognition capabilities into their software. Still, Horvitz's remarks indicate that Microsoft is willing to place limits on what customers can and can't do with artificial intelligence. Microsoft declined to comment any further on that point.

"This committee has teeth"

In a more general sense, Horvitz was discussing Aether, an acronym for "AI and ethics in engineering and research," which is Microsoft's overall AI ethical oversight committee. “It’s been an intensive effort … and I’m happy to say that this committee has teeth,” Horvitz said.

“We believe it is very important to develop and deploy AI in a responsible, trusted and ethical manner. Microsoft created the Aether committee to identify, study and recommend policies, procedures, and best practices on questions, challenges, and opportunities coming to the fore on influences of AI on people and society,” says a Microsoft spokesperson.

This approach is generally in line with Microsoft's public profile: The company's leadership has made much of the idea of ethical artificial intelligence, urging researchers, developers, and consumers to be responsible in how they use the technology.

"[We] want people to go forward in ways that are well informed, that are thoughtful, and in a sense, a commitment to shared responsibility. It is going to take a broad commitment to shared responsibility in order to ensure that AI is used well," Microsoft President Brad Smith told Business Insider earlier this year.

At the same time, the ethical use of AI is a hot topic in Silicon Valley: Earlier in April, Google employees petitioned the company's leadership to stop providing artificial intelligence to the military for use in drones.