Students report only the high school they attend and their home address, and the College Board uses publicly available data to derive the score from there. Crime rates, poverty rates, housing values, and the like are drawn from where students live. Family context, such as parents’ educational attainment, is based on averages in a student’s neighborhood.


Although the index is aimed at diversifying universities, it does not use race to determine students’ scores. Black and white students in the same neighborhood would presumably receive the same scores, as the relevant information comes from city-level, publicly available data. Several states, including California and Oklahoma, ban public universities from considering race in admissions. One of the 2017 pilots took place at a college in Florida, a state that banned taking race into account in 1999.

Thus far, the index appears to be making good on its intentions. Yale University is one of the schools already using the adversity index on a trial basis for all applicants, The Wall Street Journal reports. Since last year, the share of low-income and first-generation freshmen the school admitted has doubled, to almost 20 percent of its incoming class.

Indices such as the College Board’s new scoring system are, by definition, numerical. But adversity isn’t quantitative; it’s qualitative: the entirety of external influences in one’s life, and indeed one’s ancestors’ lives. All 15 factors that make up the index are measurable, but they’re also subjective, the result of decades or centuries of environmental and historical legacy.
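To see what reducing 15 such factors to a single number entails, consider a sketch of how a composite neighborhood index might be computed. The College Board has not published its formula, so the factor names, the percentile-averaging approach, and the 1–100 scale below are illustrative assumptions, not the actual method:

```python
# Hypothetical composite "adversity" index. Factor names, weighting
# (a plain average), and scaling are assumptions for illustration only;
# the College Board's real formula is not public.

FACTORS = ["crime_rate", "poverty_rate", "housing_value"]
INVERTED = {"housing_value"}  # higher housing values mean LESS adversity

def adversity_score(neighborhood, national_stats):
    """Average each factor's percentile rank against national data,
    scaled to 0-100, where higher means more adversity."""
    percentiles = []
    for factor in FACTORS:
        value = neighborhood[factor]
        dist = national_stats[factor]
        # Fraction of national values at or below this neighborhood's value
        rank = sum(1 for v in dist if v <= value) / len(dist)
        if factor in INVERTED:
            rank = 1 - rank
        percentiles.append(rank)
    return round(100 * sum(percentiles) / len(percentiles))
```

The sketch makes the article’s point concrete: every input is a measurable number, yet each one encodes subjective choices about what to count and how to weight it.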

The College Board is essentially trying to find a quantitative solution to the messy realities of entrenched privilege, realities that are only amplified by the very college-admissions system the board is hoping to improve. It’s a noble goal and an appealing premise: that algorithms—orderly, objective, unburdened by bias or history—can solve problems we humans can’t. But these systems are only as good as the metrics that feed their calculations, and the people making them.

Take, for example, crime rates. Any sociologist modeling crime will explain that the figure reflects not the actual number of crimes that happen in a given neighborhood or city, but the number of crimes reported to police, which is complicated by a host of factors, including the race of alleged perpetrators and a community’s relationship with its police force. (White-collar crime, for instance, is hugely underrepresented in many statistics.) Further, the notion of what is criminal varies. Say two students in different states live in neighborhoods with identical rates of marijuana usage. The neighborhood in a state with legalized marijuana would show a very different crime rate, and, potentially, its students would receive a different score.
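The marijuana example can be reduced to a few lines of arithmetic. The numbers below are invented purely to illustrate the mechanism: identical behavior produces different reported crime rates depending on what the law counts as a crime, which a neighborhood-level index would then treat as a real difference in adversity:

```python
# Illustrative-only numbers: two neighborhoods with identical marijuana
# use, differing only in whether possession is a reportable crime.

def reported_crime_rate(other_incidents_per_1000, marijuana_use_per_1000,
                        marijuana_illegal):
    # Where marijuana is illegal, usage can surface in crime statistics;
    # where it is legal, the same behavior is invisible to the index.
    rate = other_incidents_per_1000
    if marijuana_illegal:
        rate += marijuana_use_per_1000
    return rate

legal_state = reported_crime_rate(12, 30, marijuana_illegal=False)
illegal_state = reported_crime_rate(12, 30, marijuana_illegal=True)
```

Here the "legal" neighborhood reports 12 incidents per 1,000 residents and the "illegal" one 42, a more-than-threefold gap with no underlying difference in behavior.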