By Meagan Day

“Burglars and thieves work in a mathematical way,” said police chief Galen Carroll of Modesto, California, “whether they know it or not.”

Carroll was explaining to a reporter at the Modesto Bee why his department adopted new software called PredPol, short for “predictive policing.” His staff had just compiled 10 years of crime data and fed it into the software, which uses an algorithm to spit out recommendations for where to patrol. PredPol’s 500-by-500-foot “hot spots” are identified in little red boxes on a Google map. It’s big data for beat cops.

A technology startup out of Santa Cruz, California, PredPol has a savvy, Silicon Valley-inspired business model. In order to access the software, police departments are required to refer other agencies first, and to appear in press conferences supporting the product before they’ve even tested it. As a result, PredPol is gaining massive momentum and sweeping the nation’s police departments, from Tacoma to Los Angeles to Atlanta.

A screenshot of PredPol in Atlanta © Atlanta Police Department

Proponents of predictive policing are aware that the idea is potentially off-putting. An influential 2013 report even contains a section titled “The Nature of Predictive Policing: This Is Not Minority Report.” The title is a reference to a 1956 science fiction short story by Philip K. Dick, which was turned into a thriller starring Tom Cruise in 2002. In Dick’s dystopian fictional world, law enforcement agencies’ focus on “pre-crime,” or identifying perpetrators of crimes not yet committed, has devastating ramifications for individual freedom.

But crime forecasting has a history that predates even Minority Report. In fact, it’s as old as criminology itself — and its real-life application has been equally dystopian. In the late 19th century, the Italian physician Cesare Lombroso, often referred to as the father of modern criminology, introduced his theory of “criminal atavism,” which claimed that you could predict who would commit a crime simply by looking at them.

Lombroso drew from phrenology — or the pseudoscientific study of cranial proportions, which was assumed in the 19th century to provide insight into a person’s moral and psychological character — to elaborate his theory. He believed that criminals were biological “throwbacks” to “primitive humanity” who bore the appearance of “savages.” The physical characteristics that identified someone as a “criminaloid” in Lombroso’s theory included asymmetrical features, high cheekbones, low foreheads and jug ears.

Illustrations of the facial features and other attributes supposedly typical of “born” criminals, by Cesare Lombroso (1876)

Lombroso introduced phrenology into the forensic field, with disastrous results. His ideas about who looked like a “savage” and therefore a “born criminal” were based on concepts of white supremacy, and their use in law enforcement perpetuated racial hierarchies. The racism of Lombroso’s theory wasn’t subtle: He cautioned psychologists and criminologists to be on the lookout for “oblique eyelids, a Mongolian characteristic” and “the projection of the lower face and jaws (prognathism) found in negroes.” While less explicit, his emphasis on the hooked nose, which “so often imparts to criminals the aspect of birds of prey,” plays right into anti-Semitic stereotypes.

With Lombroso’s help, racial profiling became the first sanctioned form of crime prediction. And it’s stuck with us ever since. Consider New York City’s stop-and-frisk policy, under which police officers were encouraged to detain and search passers-by who appeared suspicious. Between 2002 and 2011, 90% of those stopped by police were black and Latino. It’s been over a century since Lombroso theorized that you could spot a criminal just by looking at them, and his theory has long since been scientifically debunked — but the police haven’t gotten the memo.

But wait, won’t PredPol and the new generation of predictive policing technology actually help get us out of this racial profiling habit? After all, a geolocation algorithm isn’t judging an individual based on their face shape or skin color.

According to critics, there are several ways in which algorithmic predictions of crime keep racial profiling alive. For one thing, the algorithm requires past crime data to generate new predictions. And that past crime data is not itself unbiased. At Fusion, Alexis Madrigal sat down with a black community leader in Santa Cruz, PredPol’s home base, who explained it this way:

“If you put police in one area and they do a lot of arrests, that becomes a high-crime area. Because crime rate isn’t who does stuff, it’s who gets caught. If they had as many police out in the suburbs as they do in the inner city, the crime rate in the suburbs would go up. The data’s shaped by where the police are, and the police keep shaping the data.”

So because it uses past crime data, PredPol simply tells police to keep policing the areas they’ve been policing. And those areas are disproportionately home to low-income communities of color.
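The feedback loop described above can be made concrete with a toy simulation. To be clear, this is a hypothetical sketch, not PredPol’s actual algorithm: it imagines two areas with identical underlying crime rates but an initial patrol imbalance, where crime only enters the data when a patrol is present to record it, and future patrols are allocated in proportion to the data.

```python
# Toy model of the crime-data feedback loop. Two areas have the *same*
# true crime rate, but one starts with more patrols. (A hypothetical
# illustration, not PredPol's actual model.)

TRUE_RATE = 0.1                                 # identical in both areas
patrols = {"inner_city": 8.0, "suburb": 2.0}    # historical imbalance
recorded = {"inner_city": 0.0, "suburb": 0.0}

for year in range(10):
    for area in recorded:
        # Recorded crime is proportional to patrol presence,
        # not to any difference in actual offending.
        recorded[area] += patrols[area] * TRUE_RATE
    # "Predictive" step: reallocate the same 10 patrols in
    # proportion to the recorded totals.
    total = sum(recorded.values())
    patrols = {a: 10 * recorded[a] / total for a in recorded}

print(recorded)  # inner city shows 4x the recorded crime of the suburb
print(patrols)   # ...and keeps 4x the patrols, despite equal true rates
```

Even after many iterations, the model never escapes the original imbalance: the recorded data justifies the patrol allocation, and the patrol allocation generates the recorded data.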

© Patrick Semansky/AP

And then there’s the fact that PredPol only tells police where to go — not what to do or who to arrest once they get there. When police are patrolling, rather than responding to a specific incident, the choices they make are based on hunches. And those hunches are often informed by their thinking about who belongs in an area, who seems shifty, who reminds them of people they’ve arrested in the past — in short, who looks like a criminal.

The element of racial profiling that characterizes so much of police thinking remains fundamentally unaltered by the technology. Furthermore, it’s easy to imagine how the technology might give police permission to engage in racial profiling where otherwise they might think twice. Accusations of personal bias can easily be deflected by referencing the algorithm.

There’s no proof yet that PredPol makes racial profiling by police any worse than it already is. But it doesn’t promise to alleviate the problem either. The practice of crime forecasting began with Cesare Lombroso’s theory that you could identify a criminal on first sight. Despite the illusory objectivity of algorithms, data and statistics, predictive policing technologies don’t break with that ugly tradition. They simply reframe it — in a little red box.