Transparency is becoming a hot topic in Silicon Valley. Companies need transparency in hiring, firing, and promotion practices in order to facilitate diversity efforts. Pundits and politicians are calling for transparency in how social media companies handle their advertising.

And now it’s time to think about more transparency for the artificial intelligence algorithms running our digital world. An organization called AI Now, made up of researchers from Google Open Research, Microsoft Research, and New York University, recently issued a call for 10 changes the A.I. community needs to make in 2017. At the top of that list: “Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education should no longer use ‘black box’ A.I. and algorithmic systems.”

This seems necessary for A.I. algorithms used by government agencies, but it’s a good idea for Silicon Valley companies as well.

A black-box algorithm is one whose inputs and outputs you can see, but whose inner workings you cannot. How the A.I., in this case, comes to its conclusions is a mystery to anyone not involved in its development. As Quartz points out, the application of these kinds of algorithms in the public sector raises questions about our right to “due process.” In Houston, public school teachers recently won a case on exactly this problem. An algorithm evaluated their performance as teachers based on standardized test scores. The teachers cried foul and took the district to court, and the system was eventually found to violate their right to due process. Because the evaluation algorithm was, essentially, a black box, the results of its evaluations could not be verified for accuracy.
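To make the distinction concrete, here is a minimal, purely illustrative sketch (not any agency's actual system, and the weights are invented): two functions that compute the same evaluation score, one as a black box and one that exposes its reasoning so the result can be checked.

```python
# Illustrative only: the difference between a black-box evaluation
# and one whose reasoning can be audited. Weights are hypothetical.

def black_box_score(test_scores):
    # Callers see only input -> output; the weighting is hidden inside.
    weights = [0.5, 0.25, 0.25]
    return sum(w * s for w, s in zip(weights, test_scores))

def transparent_score(test_scores):
    # Same arithmetic, but the result carries a per-subject breakdown
    # that a teacher (or a court) could verify for accuracy.
    weights = {"math": 0.5, "reading": 0.25, "writing": 0.25}
    breakdown = {subject: w * s
                 for (subject, w), s in zip(weights.items(), test_scores)}
    return sum(breakdown.values()), breakdown

scores = [80, 90, 70]
print(black_box_score(scores))   # a bare number, unverifiable
total, why = transparent_score(scores)
print(total, why)                # the same number, plus its derivation
```

Both functions produce an identical score; the difference is that only the second one leaves a trail a reviewer can follow, which is the property the Houston teachers' evaluations lacked.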

“Algorithms are human creations, and subject to error like any other human endeavor,” U.S. Magistrate Judge Stephen Smith wrote in his ruling. Smith is quite right (with the possible exception of A.I. algorithms that train themselves independent of human involvement, but that’s another story). ProPublica, for example, recently found that software used to predict the likelihood that a criminal would commit future crimes was biased against black defendants.

A.I. is permeating our daily lives, in everything from handling health care payments to determining whose posts you see on Instagram. Users have no control over what these algorithms decide to show them, and little understanding of why an algorithm may learn to surface one piece of content over another. The repercussions can be dramatic, as we’re seeing in the aftermath of last year’s presidential election.

The risk of letting A.I. run amok in consumer applications is that the public loses trust in the technology. For companies such as Facebook, there is an argument for keeping the details of its A.I. under wraps: It’s the company’s unique intellectual property, a competitive advantage it can’t expose. But companies could detail a basic version of how an A.I. system works, and the data sets it was trained on, without revealing the intricate details of the specific algorithms and machine-learning techniques involved. Without some basic knowledge of how a technology like this works, the public can become confused or scared. In the early days of location tracking on phones, people were wary of that too; sometimes, as it turned out, with good reason.

Facebook does share some information about its artificial intelligence efforts, but at an extremely high level. These updates typically spend more time discussing what an A.I. technique can achieve, and how much it has improved over past versions, than how it accomplishes that task.

But if government agencies and tech companies alike stepped back and gave users a look at how their A.I. algorithms operate, explaining what data they take as input and how they were trained to behave a certain way, it would go a long way toward defusing conspiratorial concerns. Fear often stems from ignorance.

Earlier this year, IBM came out in favor of this kind of algorithmic transparency. In a letter to Congress, IBM senior vice president David Kenny wrote:

We must help citizens understand how artificial intelligence technologies work, so they recognize that AI can serve to root out bias rather than perpetuate it. Companies must be able to explain what went into their algorithm’s decision-making process. If they can’t, then their systems shouldn’t be on the market.

This pretty much sums up where Silicon Valley needs to be—and luckily, there are steps being taken to get there.

Last week, a federal judge in New York ruled that the source code used by New York City’s crime lab to analyze DNA evidence be made public. That code, dubbed the Forensic Statistical Tool (FST), was recently retired in favor of a newer version after being used in 1,350 cases over the course of 5½ years. As we’ve learned more about DNA analysis, we’ve learned that some methods aren’t as accurate as we once thought. A group of New York City defense lawyers is now calling for a review of the cases in which FST was used in order to learn how accurate, or inaccurate, it was.

Hopefully we’ll continue to see more organizations and corporations follow this example. You can’t trust data if you don’t know how it was gathered. And you can’t trust companies that use black-box A.I. unless they’re willing to open up about their methodology.