This is Design Bias, a monthly Motherboard column in which writer Rose Eveleth explores the products, research programs, and conclusions made not necessarily because any designer or scientist or engineer sets out to discriminate, but because to them the "normal" user always looks exactly the same. The result is a world that's biased by design. -The Editor

There are lots of conversations about the lack of diversity in science and tech these days. In response, people constantly ask, "So what? Why does it matter?" There are many ways to answer that question, but perhaps the easiest is this: because a homogeneous team produces homogeneous products for a very heterogeneous world.

Are you trustworthy? For centuries this was a qualitative question, but no longer. Now you have a number, a score, that everybody from loan officers to landlords will use to determine how much they should trust you.

Credit scores are often presented as objective and neutral, but they have a long history of prejudice. Most changes in how credit scores are calculated over the years, including the shift from human assessment to computer calculations, and most recently to artificial intelligence, have come out of a desire to make the scores more equitable. Yet credit companies have failed to remove bias, such as bias on the basis of race or gender, from their systems.

More recently, credit companies have started to use machine learning and offer "alternative credit" as a way to reduce bias in credit scores. The idea is to use data that isn't normally included in a credit score to try to get a sense of how trustworthy someone might be. All data is potential credit data, these companies argue, which could include everything from your sexual orientation to your political beliefs, and even what high school you went to.

But introducing this "non-traditional" information to credit scores runs the risk of making them even more biased than they already are, eroding nearly 150 years of effort to eliminate unfairness in the system.

How credit scores evolved from humans to AI

In the 1800s, credit was determined by humans—mostly white, middle-class men—who were hired to go around and inquire about just how trustworthy a person really was. "The reporter's task was to determine the credit worthiness of individuals, necessitating often a good deal of snooping into the private and business lives of local merchants," wrote historian David A. Gerber.