For the past several years, giant tech companies have rapidly ramped up investments in artificial intelligence and machine learning. They’ve competed intensely to hire more AI researchers and used that talent to rush out smarter virtual assistants and more powerful facial recognition. In 2018, some of those companies moved to put guardrails around AI technology.

The most prominent example is Google, which announced constraints on its use of AI after two projects triggered public pushback and an employee revolt.

The internal dissent began after the search company’s work on a Pentagon program called Maven became public. Google contributed to a part of Maven that uses algorithms to highlight objects such as vehicles in drone surveillance imagery, easing the burden on military analysts. Google says its technology was limited to “nonoffensive” uses, but more than 4,500 employees signed a letter calling for the company to withdraw.

In June, Google said it would complete but not renew the Maven contract, which is due to end in 2019. It also released a broad set of principles for its use of AI, including a pledge not to deploy AI systems for use in weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Based in part on those principles, Google in October withdrew from bidding on a Pentagon cloud contract called JEDI.

Google also drew criticism after CEO Sundar Pichai demonstrated a bot called Duplex with a humanlike voice calling staff at a restaurant and hair salon to make reservations. Recipients of the calls did not appear to know they were talking with a piece of software, and the bot didn’t disclose its digital nature. Google later announced it would add disclosures. When WIRED tested Duplex ahead of its recent debut on Google’s Pixel phones, the bot began the conversation with a cheery “I’m Google’s automated booking service.”

The growth of ethical questions around the use of artificial intelligence highlights the field’s rapid and recent success. Not long ago, AI researchers were focused mostly on getting their technology to work well enough to be practical. Now they’ve made image and voice recognition, synthetic speech, fake imagery, and robots such as driverless cars reliable enough to be deployed in public. Engineers and researchers once dedicated solely to advancing the technology as quickly as possible are becoming more reflective.

“For the past few years I’ve been obsessed with making sure that everyone can use it a thousand times faster,” Joaquin Candela, Facebook’s director of applied machine learning, said of the company’s machine-learning tools earlier this year. As more teams inside Facebook use the tools, “I started to become very conscious about our potential blind spots,” he said.

That realization is one reason Facebook created an internal group to work on making AI technology ethical and fair. One of its projects is a tool called Fairness Flow that helps engineers check how their code performs for different demographic groups, say, men and women. It has been used to tune the company’s system for recommending job ads to people.
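
Facebook hasn’t published Fairness Flow’s internals, so the sketch below is only an illustration of the general technique in Python: compute one metric separately per demographic group and flag large gaps. The record format, function name, and threshold are hypothetical, not Facebook’s actual API.

```python
# Illustrative sketch of a per-group performance check, in the spirit of
# Fairness Flow. All names and the threshold below are hypothetical;
# Facebook has not published the tool's API.
from collections import defaultdict

def accuracy_by_group(records, group_key="gender"):
    """Compute prediction accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        group = r[group_key]
        total[group] += 1
        if r["prediction"] == r["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy evaluation set: model predictions vs. true labels, with a
# demographic attribute attached to each example.
records = [
    {"prediction": 1, "label": 1, "gender": "female"},
    {"prediction": 0, "label": 1, "gender": "female"},
    {"prediction": 1, "label": 1, "gender": "male"},
    {"prediction": 1, "label": 1, "gender": "male"},
]

scores = accuracy_by_group(records)
print(scores)  # {'female': 0.5, 'male': 1.0}

# A large gap between groups is the signal an engineer would investigate,
# for example by rebalancing training data. The threshold is illustrative.
gap = max(scores.values()) - min(scores.values())
if gap > 0.10:
    print(f"Warning: {gap:.0%} accuracy gap across gender groups")
```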

A February study of several services that use AI to analyze images of faces illustrates what can happen if companies don’t monitor the performance of their technology. Joy Buolamwini and Timnit Gebru showed that facial-analysis services offered by Microsoft and IBM’s cloud divisions were significantly less accurate for women with darker skin. That bias could have spread broadly because many companies outsource technology to cloud providers. Both Microsoft and IBM scrambled to improve their services, for example by increasing the diversity of their training data.
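
The study’s methodology can be sketched the same way: label a benchmark of faces by skin type and gender together, then compare each service’s error rate per subgroup. The Python below uses made-up numbers purely for illustration, not the researchers’ data or results; it shows how a disparity that looks moderate one attribute at a time can be much larger at the intersection of attributes.

```python
# Illustrative intersectional audit in the style of the Buolamwini-Gebru
# study. The results list is fabricated toy data, not the study's benchmark.
from collections import Counter

# Each tuple: (skin_type, gender, whether the service classified correctly)
results = [
    ("darker", "female", False), ("darker", "female", True),
    ("darker", "male", True),    ("darker", "male", True),
    ("lighter", "female", True), ("lighter", "female", True),
    ("lighter", "male", True),   ("lighter", "male", True),
]

errors, totals = Counter(), Counter()
for skin, gender, correct in results:
    totals[(skin, gender)] += 1
    if not correct:
        errors[(skin, gender)] += 1

# Per-subgroup error rates; with this toy data, darker-skinned women see a
# 50% error rate even though no single attribute alone shows more than 25%.
for subgroup in sorted(totals):
    print(f"{subgroup}: error rate {errors[subgroup] / totals[subgroup]:.0%}")
```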