Durham police in England are now gearing up to use an AI tool to help decide whether a suspect should be kept in custody. Putting human lives under the scrutiny of artificial intelligence may sound odd, but as the technology matures, it may become the norm as a way to reduce human error. Here's all that you need to know.

Algorithm Whether suspects should be remanded or not

Harm Assessment Risk Tool (HART) is an algorithm that will now help Durham police decide whether a suspect should walk free or not. The system will start operating within the next 2-3 months, drawing on data collated over the years to classify suspects as being at "low, medium and high risk" of committing a crime if released.

Quote Forecasting risk of future harm

"The basic logic is to use the prior histories of thousands of people arrested and processed in Durham to forecast the level of risk of high harm they will cause by criminal acts within two years after they are arrested." -Professor Lawrence Sherman.

Accuracy How accurate are these predictions?

The algorithm has been supplied with around five years of collected data, including suspects' "offending history, gender, and postcode." When HART was tested in 2013, it identified low-risk suspects with 98% accuracy and high-risk suspects with 88% accuracy. However, during that period HART's results were only monitored, not taken into account when making judgments.
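The article describes HART as sorting suspects into low-, medium- and high-risk tiers based on historical features such as offending history and postcode. As a rough illustration only, the sketch below shows what that kind of tiered risk triage could look like; HART's actual model, predictors, weights and thresholds are not disclosed here, and the function names, the 0.1 weighting and the score cut-offs are all invented for this example.

```python
from collections import defaultdict

# Hypothetical sketch of a HART-style risk triage; NOT the real model.
# All feature names, weights and thresholds below are invented.

def train_postcode_rates(history):
    """Estimate a historical reoffending rate per postcode area.

    history: list of (postcode, reoffended: bool) pairs from past cases.
    """
    counts = defaultdict(lambda: [0, 0])  # postcode -> [reoffended, total]
    for postcode, reoffended in history:
        counts[postcode][0] += int(reoffended)
        counts[postcode][1] += 1
    return {pc: r / t for pc, (r, t) in counts.items()}

def classify_risk(prior_offences, postcode, rates):
    """Combine offending history with the area's historical rate and map
    the score onto the low/medium/high tiers the article describes."""
    score = prior_offences * 0.1 + rates.get(postcode, 0.0)
    if score >= 0.8:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

history = [("DH1", True), ("DH1", False), ("DH7", False), ("DH7", False)]
rates = train_postcode_rates(history)
print(classify_risk(6, "DH1", rates))  # long offending history dominates
print(classify_risk(0, "DH7", rates))
```

Note how even this toy version shows the postcode concern raised later in the article: the area a suspect lives in directly shifts their score, regardless of their own history.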

Race Will racial bias also get into the mix?

News website ProPublica had released a report showing that another AI system, used by Florida authorities, exhibited racial bias, producing disproportionately negative forecasts for black suspects compared to white ones. While HART doesn't include race among its predictors, officials have acknowledged that indirect bias may creep in, since certain postcodes could act as a proxy for race.

Quote Bringing unwanted emotions out

"To some extent, what learning models do is bring out into the foreground hidden and tacit assumptions that have been made all along by human beings." -Professor Cary Coglianese, a political scientist at the University of Pennsylvania.

HART Walking on a very thin line

On being asked how accurate HART is in making judgments, the researchers told authorities that the algorithm draws on several predictors, so no single factor can sway its conclusion. Moreover, HART plays an "advisory" role rather than making the final call, and if any debate arises over how the system reached a specific conclusion, an audit trail can be provided for scrutiny.

Limitations HART is "interesting" but not error-free