When Cicurel and her team looked more closely at the assessment technology, they discovered that it had never been properly validated by any scientific group or judicial organization; the only review it had received was an unpublished graduate-student thesis. Cicurel realized that for more than a decade, juvenile defendants in Washington, D.C., had been judged, and even committed to detention facilities, because the courts had relied on a tool whose sole validation in 20 years was a college paper.

The judge in this case threw out the test. But criminal-assessment tools like this one are being used across the country, and not every defendant is lucky enough to have a public defender like Rachel Cicurel in his or her corner.

In the latest episode of Crazy/Genius, produced by Patricia Yacob and Jesse Brenneman, we take a long look at the use of AI in the legal system. Algorithms pervade our lives. They determine the news we see and the products we buy. The presence of these tools is relatively obvious: Most people using Netflix or Amazon understand that their experience is mediated by technology.

But algorithms also play a quiet and often devastating role in almost every element of the criminal-justice system—from policing and bail to sentencing and parole. By turning to computers, many states and cities are putting Americans’ fates in the hands of algorithms that may be nothing more than mathematical expressions of underlying bias.

Perhaps no journalist has done more to uncover this shadowy world of criminal-justice AI than Julia Angwin, a longtime investigative reporter. In 2016, Angwin and a team at ProPublica published a detailed report on COMPAS, a risk-assessment tool created by the company Equivant, then called Northpointe. (After corresponding over several emails, Equivant declined to comment for our story.)

In 2013, a Wisconsin man named Paul Zilly was facing sentencing in a courtroom in Barron County. Zilly had been convicted of stealing a lawn mower, and his lawyer agreed to a plea deal. But the judge consulted COMPAS, which had determined that Zilly was a high risk for future violent crime. “It is about as bad as it could be,” the judge said of the risk assessment, according to the ProPublica report. The judge rejected the plea deal and imposed a new sentence that would double Zilly’s time in prison.

Angwin and her team wanted to know more about the COMPAS algorithm: It seemed unfair, but was it truly biased? They got access to the COMPAS scores of 7,000 people arrested in Broward County, Florida, and compared those scores with the criminal histories of those same people over the next few years. “The score proved remarkably unreliable in forecasting violent crime,” they found. “Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” They also concluded that the algorithm was twice as likely to falsely flag black defendants as future criminals as it was to falsely flag white defendants.
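The disparity ProPublica measured comes down to a simple per-group error rate: among people who did not go on to reoffend, how often did the tool flag them as high risk anyway? A minimal sketch of that calculation, using a handful of made-up records in place of the real Broward County data (the field names and numbers here are illustrative, not ProPublica's actual data or methodology):

```python
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical records: each pairs the tool's prediction with the
# person's actual outcome over the follow-up period.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},  # false positive
    {"group": "A", "high_risk": True,  "reoffended": True},   # true positive
    {"group": "A", "high_risk": True,  "reoffended": False},  # false positive
    {"group": "A", "high_risk": False, "reoffended": False},  # true negative
    {"group": "B", "high_risk": True,  "reoffended": True},   # true positive
    {"group": "B", "high_risk": False, "reoffended": False},  # true negative
    {"group": "B", "high_risk": True,  "reoffended": False},  # false positive
    {"group": "B", "high_risk": False, "reoffended": False},  # true negative
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"group {group}: false positive rate = {false_positive_rate(subset):.2f}")
```

In this toy data, group A's false positive rate is twice group B's, even though the tool "works" for both groups some of the time. That is the shape of the finding above: the same score can look neutral overall while its errors fall far more heavily on one group.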