But one area where machine-learning algorithms still struggle is explaining to humans how and why they’re making particular decisions. That can be fine if computers are just playing games, but for more serious applications people are a lot less willing to trust a machine whose thought processes they can’t understand.

If AI is being used to make decisions about who to hire or whether to extend a bank loan, people want to make sure the algorithm hasn’t absorbed race or gender biases from the society that trained it. If a computer is going to drive a car, engineers will want to make sure it doesn’t have any blind spots that will send it careening off the road in unexpected situations. And if a machine is going to help make medical diagnoses, doctors and patients will want to know what symptoms and readings it’s relying on.

“If you go to a doctor and the doctor says, ‘Hey, you have six months to live,’ and offers absolutely no explanation as to why the doctor is saying that, that would be a pretty poor doctor,” says Sameer Singh, an assistant professor of computer science at the University of California at Irvine.

Singh is a coauthor of a frequently cited paper published last year that proposes a system for making machine-learning decisions more comprehensible to humans. The system, known as LIME (Local Interpretable Model-agnostic Explanations), highlights the parts of the input data that weigh most heavily in the computer's decisions. In one example from the paper, an algorithm trained to distinguish forum posts about Christianity from those about atheism appears accurate at first blush, but LIME reveals that it's relying heavily on forum-specific features, like the names of "prolific posters."
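The core idea behind LIME can be illustrated with a much-simplified sketch: randomly perturb the input (here, by masking words out of a post), query the black-box classifier on each perturbed version, and estimate each word's influence from how its presence or absence shifts the prediction. The `black_box` function below is a hypothetical, deliberately flawed classifier invented for this example (it leaks a poster's name, echoing the paper's finding), and the perturbation analysis is a crude stand-in for LIME's locally weighted linear model, not the actual library.

```python
import random

def black_box(text):
    # Hypothetical flawed classifier: returns a score for "this post
    # is about atheism." It leaks forum metadata: the poster name
    # "Keith" dominates a genuinely topical word like "evidence."
    words = text.split()
    score = 0.0
    if "Keith" in words:     # spurious, forum-specific feature
        score += 0.8
    if "evidence" in words:  # genuinely topical feature
        score += 0.2
    return min(score, 1.0)

def explain(text, predict, n_samples=500, seed=0):
    """Estimate each word's influence by randomly masking words and
    comparing the black box's average output when the word is present
    vs. absent (a simplified, LIME-like perturbation analysis)."""
    rng = random.Random(seed)
    words = text.split()
    present = {w: [] for w in words}
    absent = {w: [] for w in words}
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]
        perturbed = " ".join(w for w, keep in zip(words, mask) if keep)
        y = predict(perturbed)
        for w, keep in zip(words, mask):
            (present if keep else absent)[w].append(y)
    weights = {}
    for w in words:
        p = sum(present[w]) / max(len(present[w]), 1)
        a = sum(absent[w]) / max(len(absent[w]), 1)
        weights[w] = p - a
    return sorted(weights.items(), key=lambda kv: -abs(kv[1]))

post = "Keith argued there is no evidence for miracles"
for word, weight in explain(post, black_box)[:3]:
    print(f"{word:10s} {weight:+.2f}")  # "Keith" should rank first
```

Running the sketch surfaces exactly the failure mode Singh's paper describes: the poster's name carries far more weight than any word about the topic, a cue a human reviewer would immediately recognize as illegitimate.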

Developing explainable AI, as such systems are frequently called, is more than an academic exercise. It's of growing interest to commercial users of AI and to the military. Explaining how an algorithm reaches its conclusions makes it easier for leaders to adopt artificial intelligence systems within their organizations, and easier to challenge those systems when they're wrong.

“If they disagree with that decision, they will be way more confident in going back to the people who wrote that and say no, this doesn’t make sense because of this,” says Mark Hammond, cofounder and CEO of AI startup Bonsai.