Report: Crime prediction software less accurate than random people

Technology | 19.01.2018

A popular program called COMPAS claims it can predict whether criminal defendants will commit further crimes, and it has been used by judges across the US. However, a new study has found the algorithm's predictions to be no more accurate than those of untrained humans.

A new study by researchers at Dartmouth College has revealed that popular software used by some US courts to predict the likelihood of repeated criminal offenses by an individual is no more accurate "than predictions made by people with little or no criminal justice expertise."

As Dressel and Farid write in the study: "We are the frequent subjects of predictive algorithms that determine music recommendations, product advertising, university admission, job placement, and bank loan qualification. In the criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to reoffend at some point in the future."

The program in question is called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). Manufactured by a firm called Northpointe, which has rebranded itself as Equivant, the tool claims it can determine if a criminal defendant is likely to commit another crime based on a 137-point questionnaire.

Some of the questions include "If you lived with both parents and they later separated, how old were you at the time?" and "How many of your friends have ever been arrested?"

COMPAS has been used by judges across the US to inform parole decisions, pretrial motions and sentencing hearings for repeat offenders.

Random people more accurate than computers

However, according to the report, published in the journal Science Advances, COMPAS, a far simpler predictor drawing on just two pieces of information, and random people on the internet all have about the same accuracy in predicting recidivism.

Soliciting the help of random individuals on Amazon's Mechanical Turk marketplace, researchers Julia Dressel and Hany Farid found that all three approaches had an accuracy of between 65 and 67 percent, with the humans scoring slightly higher than the software.

This is not the first time COMPAS has faced intense criticism after its math was put to the test. A May 2016 ProPublica report accused the software of being biased against African Americans, finding that the rate at which Black defendants were wrongly flagged as likely to commit a second offense was "nearly twice as high as their white counterparts," even among those who did not reoffend.

Northpointe disputed the claim that its software was racist, a disagreement that came down to, as Dressel and Farid put it, "different definitions of fairness."

According to the Atlantic, COMPAS' results are dubious enough that the Wisconsin Supreme Court has urged caution in the program's use.

Elizabeth Schumacher