The ruling also notes that the criteria were based on Loomis' publicly available criminal history, and that he could have double-checked that the questions and answers on the report were accurate.

Needless to say, this decision won't sit well with Loomis or his supporters. For instance, how do you tell when a judge is merely considering an algorithm's output rather than relying on it? And how do you reconcile this ruling with decisions from other courts, such as the Minnesota court that ordered the release of breathalyzer source code? As TechDirt notes, there's a worry that only extreme recommendations will trigger concerns about dependence on the algorithm, and that subtler biases like racism or sexism could slip through unnoticed simply because the data looks reasonable on the surface.