In some ways, artificial intelligence acts like a mirror. Machine learning tools are designed to detect patterns, and they often reflect back the same biases we already know exist in our culture. Algorithms can be sexist or racist, and can perpetuate other structural inequalities found in society. But unlike humans, algorithms aren’t under any obligation to explain themselves. In fact, even the people who build them aren’t always capable of describing how they work.

That means people are sometimes left unable to grasp why they lost their health care benefits, were declined a loan, rejected from a job, or denied bail—all decisions increasingly made in part by automated systems. Worse, they have no way to determine whether bias played a role.

In response to the problem of AI bias and so-called “black box” algorithms, many machine learning experts, technology companies, and governments have called for more fairness, accountability, and transparency in AI. The research arm of the Department of Defense, for example, has taken an interest in developing machine learning models that can more easily account for how they make decisions. And companies like Alphabet and IBM, as well as the auditing firm KPMG, have built or are building tools for explaining how their AI products come to conclusions.


But that doesn’t mean everyone agrees on what constitutes a fair explanation. There’s no common standard for what level of transparency is sufficient. Does a bank need to publicly release the computer code behind its loan algorithm to be truly transparent? What percentage of defendants need to understand the explanation given for how a recidivism AI works?

“Algorithmic transparency isn’t an end in and of itself,” says Madeleine Clare Elish, a researcher who leads the Intelligence & Autonomy Initiative at Data & Society. “It’s necessary to ask: Transparent to whom and for what purpose? Transparency for the sake of transparency is not enough.”

By and large, lawmakers haven’t decided what rights citizens should have when it comes to transparency in algorithmic decision-making. In the US, there are some regulations designed to protect consumers, including the Fair Credit Reporting Act, which requires individuals be notified of the main reason they were denied credit. But there isn’t a broad “right to explanation” for how a machine came to a conclusion about your life. The term appears in the European Union's General Data Protection Regulation (GDPR), a privacy law meant to give users more control over how companies collect and retain their personal data, but only in the non-binding portion. Which means it doesn't really exist in Europe, either, says Sandra Wachter, a lawyer and assistant professor in data ethics and internet regulation at the Oxford Internet Institute.

GDPR’s shortcomings haven’t stopped Wachter from exploring what the right to explanation might look like in the future, though. In an article published in the Harvard Journal of Law & Technology earlier this year, Wachter, along with Brent Mittelstadt and Chris Russell, argues that algorithms should offer people “counterfactual explanations”: disclosures of how they came to their decision, along with the smallest change “that can be made to obtain a desirable outcome.”

For example, an algorithm that calculates loan approvals should explain not only why you were denied credit, but also what you can do to reverse the decision. It should say that you were denied the loan for having too little in savings, and provide the minimum additional amount you would need to save to be approved. Offering counterfactual explanations doesn’t require that the researchers who designed an algorithm release the code that runs it. That’s because you don’t necessarily need to understand how a machine learning system works to know why it reached a certain decision.
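The loan example above can be sketched in code. This is a minimal illustration, not the method from Wachter, Mittelstadt, and Russell's paper: the scoring rule, its weights, and the approval threshold are all invented for the sake of the example. The point is that the counterfactual (the extra savings needed to flip a denial) can be reported without publishing the model itself.

```python
# Hypothetical loan-approval model. The weights (0.6, 0.4) and the
# threshold are illustrative assumptions, not any real lender's rule.
APPROVAL_THRESHOLD = 10_000

def loan_score(savings, income):
    """Toy linear scoring rule for a loan application."""
    return 0.6 * savings + 0.4 * income

def counterfactual_savings(savings, income):
    """Smallest additional savings that would flip a denial into an
    approval, holding income fixed. Returns 0 if already approved."""
    score = loan_score(savings, income)
    if score >= APPROVAL_THRESHOLD:
        return 0.0
    # Solve 0.6 * (savings + delta) + 0.4 * income = threshold for delta.
    return (APPROVAL_THRESHOLD - score) / 0.6

extra = counterfactual_savings(savings=8_000, income=9_000)
print(f"Denied. Saving ${extra:,.2f} more would lead to approval.")
```

An applicant given this answer learns something actionable, and the lender discloses only one fact about the decision boundary rather than the code behind it.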