Last Friday Gillian Tett ran a profoundly disturbing article in the Financial Times entitled Mapping Crime – Or Stirring Hate? (hat tip Marcos Carreira). It makes me sad to say this, given how much respect I normally have for her coverage of the financial crisis.

In the article, Tett describes the predictive policing model used by the Chicago police force, which told the police where to go to find criminals based on where people had been arrested in the past.
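To make concrete what a model like that amounts to, here is a minimal sketch of a naive "hotspot" predictor. The grid cells, counts, and the hotspot_ranking function are my own illustrative construction, not the CPD's actual system, which the article doesn't describe in detail:

```python
from collections import Counter

def hotspot_ranking(past_arrests, top_k=5):
    """Rank grid cells by historical arrest count.

    past_arrests: list of (x, y) grid-cell coordinates, one per past arrest.
    Returns the top_k cells with the most past arrests -- i.e. where a naive
    "predictive policing" rule would send patrols next.
    """
    counts = Counter(past_arrests)
    return [cell for cell, _ in counts.most_common(top_k)]

# Toy history: whichever neighborhoods generated the most arrests in the
# past dominate the "prediction", whatever the underlying crime rates were.
history = [(1, 1)] * 40 + [(1, 2)] * 35 + [(7, 7)] * 5
print(hotspot_ranking(history, top_k=2))  # [(1, 1), (1, 2)]
```

Notice that the only input is past arrest locations; that fact will matter below.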

Her article reads like an advertisement for racist profiling. First she deftly and indirectly suggests that the model was super successful at lowering the murder rate, without actually coming out and saying so (since all she actually has is correlational evidence):

And when Weis launched the programme in early 2010, together with a clever policeman-cum-computer expert called Brett Goldstein, it delivered impressive results. In the first year the murder rate fell 5 per cent and then continued to tumble. Indeed by the summer of 2011 it looked as if Chicago’s annual death toll would soon drop below 400, the lowest since 1965. “The homicide rates for that summer were just crazy low compared to what we had been,” Weis observes. But then, following his departure from the force, the programme was wound down in late 2011. And, tragically, the murder rate immediately rose again.

Here’s the thing: it’s really hard to actually know why murder rates go up and down. In New York City, violent crime rates have been steadily falling (as they have in many other cities) during the years we’ve been using Stop & Frisk, and for a long time Bloomberg credited the Stop & Frisk practice for that decline. But when Stop & Frisk rates went down, murder rates didn’t shoot up. Just saying. And that’s before we even get to how reliable the police data is, which is another issue. Let’s take a look at her evidence over a longer time frame:

[Chart: Chicago murder rate over a longer time frame]
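And just for a back-of-the-envelope sense of how noisy a single year of murder numbers is, here's a rough sketch using my own assumed figures (roughly 450 homicides a year, treated as a Poisson count), not numbers from the FT piece. Chance alone produces year-to-year swings on the order of 5 per cent, which is about the size of the drop being credited to the programme:

```python
import numpy as np

# Rough illustration with assumed numbers (mine, not Tett's): treat annual
# homicides as a Poisson count with a constant underlying rate of ~450.
rate = 450

# Noise alone gives year-to-year swings of about sqrt(450) ~= 21 homicides,
# i.e. roughly 5 percent of the total.
print(f"std dev from noise alone: {np.sqrt(rate):.1f} "
      f"({100 * np.sqrt(rate) / rate:.1f}% of the rate)")

# Simulate a decade in which NOTHING about the underlying rate changes and
# look at the year-over-year percentage swings that show up by pure chance.
rng = np.random.default_rng(0)
counts = rng.poisson(rate, size=10)
changes = 100 * np.diff(counts) / counts[:-1]
print("simulated annual counts:", counts)
print("year-over-year % changes:", np.round(changes, 1))
```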

The reason I’m pointing out her bad statistics is that she needs them to set up the following, truly disturbing paragraphs (emphasis mine):

But while racism is rightly deemed unacceptable, computer programs pose more subtle questions. If a spreadsheet forecast has a racial imbalance, is this likely to reinforce existing human biases, or racial profiling? Or is a weather map of crime simply a neutral tool? To put it another way, does the benefit of using predictive policing outweigh any worries about political risk? Personally, I think it does. After all, as the former CPD computer experts point out, the algorithms in themselves are neutral. “This program had absolutely nothing to do with race… but multi-variable equations,” argues Goldstein. Meanwhile, the potential benefits of predictive policing are profound.

No, Gillian Tett, there is no such thing as a neutral tool. No algorithm focused on human behavior is neutral. Anything trained on historical human behavior embeds and codifies historical and cultural practices. Specifically: black Americans are nearly four times as likely as whites to be arrested on charges of marijuana possession, even though the two groups use the drug at similar rates. Such a model (or rather, the people who deploy the model) would treat that arrest pattern as a neutral, true fact of nature, when it is in fact a direct consequence of systemic racism.
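To spell that out with a toy simulation (my own illustrative numbers, nobody else's): give two neighborhoods identical underlying offense rates, but start one of them off with four times the patrol presence, and then let the "neutral" arrest data decide where next year's patrols go. The model never sees race; it just faithfully reproduces the original imbalance:

```python
import random

random.seed(1)

# Toy simulation: two neighborhoods with IDENTICAL underlying offense rates,
# but neighborhood A historically gets 4x the patrol presence, so its
# offenses turn into arrests far more often than B's.
TRUE_OFFENSES_PER_YEAR = 100            # the same in both neighborhoods
patrol_share = {"A": 0.8, "B": 0.2}     # historical 4:1 patrol imbalance

arrests = {"A": 0, "B": 0}
for year in range(5):
    for hood in arrests:
        # An offense becomes an arrest only if a patrol happens to be there.
        observed = sum(random.random() < patrol_share[hood]
                       for _ in range(TRUE_OFFENSES_PER_YEAR))
        arrests[hood] += observed

    # The "predictive" step: next year's patrols follow this year's arrest
    # data, exactly as a neutral-looking hotspot model would recommend.
    total = arrests["A"] + arrests["B"]
    patrol_share = {h: arrests[h] / total for h in arrests}

# A ends up with roughly 4x B's arrest count despite equal offending, and the
# patrol allocation has "learned" to keep it that way.
print(arrests)
print(patrol_share)
```

The equations really are neutral; the arrest data they are trained on is not.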

To put it another way: if we had let a model trained on historical data run college admissions in 1870, we’d still have only 0.7% of women going to college. Thank goodness we didn’t have big data back then!

It’s very scary to me that even Gillian Tett, who famously predicted the financial crisis in 2006, can be fooled by this. We clearly have a lot of work to do.