Caging people for being undocumented used to be the exception. Now it seems to be the rule.

Buried in a Reuters special report on ICE’s increasingly draconian detention policies under the Trump administration is a new bit of information about the agency’s “risk classification assessment” tool, an algorithm that ICE uses to determine whether someone should be detained or let out on bond after being arrested. Or at least it used to. According to Reuters, ICE changed the algorithm to recommend detention in every case in order to comply with Trump administration policies. A human at ICE can theoretically override that recommendation, but in practice the number of immigrants in detention has skyrocketed.

The impact of these changes was immediate. The number of immigrants with no criminal history that ICE booked into detention tripled to more than 43,000 in 2017 from a year earlier, according to agency data. And while ICE continues to arrest more immigrants with criminal records than those without, the most recent data that the agency provided show that in the first 100 days of the Trump administration, the most serious crime committed by nearly half of those arrested was an immigration or traffic violation, not including drunken driving.[1]

This is the dystopian future.

At this point, the arguments against using “predictive” algorithms in law enforcement are fairly well rehearsed. They are generally what Cathy O’Neil calls “weapons of math destruction.” Computer algorithms that try to predict what a human will do in the future based on what they’ve done in the past (or where they live, or what their job is, or anything else that ends up being mostly a proxy for race and class) are largely ways of hiding existing discrimination behind a wall of proprietary software while maintaining a veneer of neutrality.

Not to mention, the programs that determine things like how long a person’s sentence is, whether he gets out on parole, or what amount of cash bail is set, are generally owned by private companies. The actual math behind these programs is kept under a veil of secrecy because “software” is “proprietary” and copyright trumps government transparency about who we put in cages in 2018.

But this is not either of those problems. This doesn’t fit into the well-developed answer about why algorithms are dangerous. This is not a warping of outcomes seeping into the algorithm indirectly through inputs that reflect decades of redlining, school segregation, and job discrimination. This is an agency taking a flawed but at least somewhat individualized way of determining an outcome and quietly resetting it to inflict maximum violence.

Under the new administration, the goalposts totally shifted. The goal of the algorithm used to be helping the government determine which arrestees were most likely to either be a danger to the community or fail to return to court (very few fit into either category!), so as to detain the riskiest people and release the others. The underlying legal and political assumption was that most people should be released, and there should be some sort of fairness in determining whether you, the individual person, were one of those people. The critique was that the agency’s notion of fairness usually sucked.
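To make the shift concrete: ICE’s actual code isn’t public, so this is purely a hypothetical sketch, with invented function names, inputs, and thresholds. The point it illustrates is the one Reuters reported: the inputs can still be collected, but the release branch has simply been deleted.

```python
# Hypothetical sketch only -- not ICE's real code. All names,
# scores, and thresholds here are invented for illustration.

def assess_before(danger_score: float, flight_risk_score: float) -> str:
    """Old behavior: weigh the individual's risk; most people
    score low on both axes and are recommended for release."""
    if danger_score > 0.8 or flight_risk_score > 0.8:
        return "detain"
    return "release on bond"

def assess_after(danger_score: float, flight_risk_score: float) -> str:
    """New behavior: the same inputs go in, but the recommendation
    is always the same. A human can theoretically override it."""
    return "detain"

# The same low-risk person gets opposite recommendations:
print(assess_before(0.1, 0.1))  # release on bond
print(assess_after(0.1, 0.1))   # detain
```

The design choice being criticized is visible in the second function: it is no longer a model of anything, just a constant wearing a model’s clothes.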

But the Trump administration pulled the rug out from under all of that. There’s no underlying assumption that the process should be fair (or, really, that there should be process at all). ICE’s mission is no longer to determine who deserves to be detained and who doesn’t. Its mission is to put people who are undocumented in cages before expelling them, and it’s easy enough to get a computer to tell agents to do that. So while the rest of us are still over here debating the ethics of artificial intelligence, the government has already stripped it of all the complexity we were worried about, somehow making it worse than we imagined in the process.

[1] It’s nearly impossible to get released on bond if you have a DUI.

Shane Ferro is a law student and a former professional blogger. She is (obviously) a bleeding-heart public interest kid.