My name is Mutale and I am an expert in Artificial Intelligence (A.I.) Governance. This is my area of expertise, yet every time I say it, it sounds strange to me. That’s mainly because I’m one of the few black women in this space. And although I would prefer to work behind the scenes, I still walk into too many rooms where people do not believe machines can be racist for me to stand on the sidelines. These interactions can be really difficult and I still do not know how to handle the discomfort. But what’s the alternative?

Artificial intelligence is changing everything from how we are selected for jobs to whether or not we are given bank loans. A recent McKinsey report found A.I. technologies have the potential to add somewhere between $3.5 trillion and $5.8 trillion to the U.S. economy, across 19 industries, through the collection and monetization of our data. This is incredibly exciting, but it also worries me. A.I. systems cannot read social context, yet they are being developed to meet the needs of all humanity.

My question: who is behind the scenes creating this wealth? A quick look at the Google A.I. research website provides some insight: it lists 893 people working on “machine intelligence.” Only one is a black woman — yes, I said one — the number before two and after zero.

The same is true of Facebook, which lists 146 people on its A.I. Research Team, yet none are black — not one. It all makes sense when you realize how few black women are earning PhDs in Computer Science. A study by the Computing Research Association found that four black women graduated with a PhD in Computer Science in 2014. The future is here, y’all, and it’s pretty vanilla.

My career was launched through the Data & Society Fellowship and supported by the good people at A.I. Ethics Twitter. Being one of the first means my work has been featured heavily in the press. I have spoken at universities throughout the United States and will be going to Germany to deliver a keynote next month. However, I cannot and do not want to be the only one.

A.I. is here and it can be racist

To understand the implications of black women being left out of this field, we have to examine the social context in which A.I. technology is built. Brandeis Marshall, professor of computer science at Spelman College, says that algorithms are the “brains” of all technological systems: they make the decisions. To teach algorithms to make those decisions, A.I. researchers train them through a process commonly known as machine learning (ML).

ML is a branch of artificial intelligence, which is itself a subfield of computer science. ML works by identifying patterns in large amounts of population data (or “big data”), making associations, and using those associations to predict future behavior. These predictions are converted into statistical models, which are used to make decisions about our everyday lives.
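As a sketch of that loop, here is a minimal, hypothetical example in Python (the data, names, and “loan decision” scenario are invented for illustration, not drawn from any real system): it tallies patterns in historical records, then “predicts” an outcome for a new case by replaying the most frequent past outcome.

```python
from collections import Counter, defaultdict

def train(records):
    """Count how often each feature value co-occurred with each outcome."""
    counts = defaultdict(Counter)
    for feature, outcome in records:
        counts[feature][outcome] += 1
    return counts

def predict(model, feature):
    """Predict the most frequent historical outcome for this feature."""
    if feature not in model:
        return None
    return model[feature].most_common(1)[0][0]

# Hypothetical loan history: (neighborhood, decision)
history = [
    ("north", "approved"), ("north", "approved"),
    ("south", "denied"), ("south", "denied"), ("south", "approved"),
]
model = train(history)
print(predict(model, "south"))  # prints "denied": the model replays past patterns
```

Real systems fit statistical models over far larger datasets, but the logic is the same: whatever pattern sits in the historical data, including a discriminatory one, becomes the prediction.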

For example, police forces may use historical stop data to create a statistical model that identifies “crime hot spots.” However, if the stops happen through practices like stop and frisk, in which 90% of the men of color targeted were innocent, the dataset is not actually a record of crime; it is a record of police stopping innocent black and brown men. Police forces are handed this data and dispatch large numbers of officers into the flagged communities, where those officers record still more stops. The data is being misused to predict criminal behavior.
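That feedback loop can be sketched in a few lines of Python. This is a toy simulation with invented numbers, not real policing data: a model flags whichever neighborhood has the most recorded stops, extra patrols are sent there, and those patrols generate more recorded stops, so the initial skew compounds year over year.

```python
# Toy simulation of the predictive-policing feedback loop described above.
# All neighborhood names and numbers are invented for illustration.
stops = {"neighborhood_a": 60, "neighborhood_b": 40}  # skewed history

for year in range(3):
    hot_spot = max(stops, key=stops.get)  # model flags the "hot spot"
    stops[hot_spot] += 50                 # extra patrols log more stops there

share = stops["neighborhood_a"] / sum(stops.values())
print(f"{share:.0%} of recorded stops now come from neighborhood_a")
# prints "84% ...": a 60/40 skew has hardened into 210 stops vs. 40
```

Nothing about actual crime rates ever enters the loop; the model only ever confirms where the police have already been looking.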

As mathematician Cathy O’Neil points out, there is an assumption here that algorithmic decision making is objective, when in fact it encodes the biases of its developers into statistical models — what she calls weapons of math destruction, because of the impact algorithmic bias has on the lives of minoritized communities.

What is worse, sociologist Eduardo Bonilla-Silva has documented a phenomenon he calls color-blind racism, in which people insist they do not see color yet do nothing to dismantle racist systems. I’ll let that sink in.

I’ve just published a report that found computer scientists typically describe themselves as color blind. They do not have the racial literacy to identify and remove racial proxies, like stop-and-frisk training data, during the algorithmic design phase! So we are out here with a bunch of racist A.I. systems being used to decide everything from when our cars will stop to which types of people are offered health care.

Wait. I am not finished, y’all. Algorithms are thought to be the “secret sauce” that drives A.I. technologies. They are protected by intellectual property laws — that means we cannot hold these companies accountable for racial discrimination, because we do not know how they make their determinations!

This is where I come in. I see myself as a descendant of Ida B. Wells, the famous anti-lynching journalist and activist. Like her, I am using evidence, in my case evidence of A.I. bias, to introduce legislation that protects the digital civil rights of African Americans.

My work grapples with how to develop new legal frameworks to hold tech companies accountable. Currently, we can only prosecute for intentional acts of discrimination, but how can we legislate against the unintended consequences of racial proxies?

The term A.I. was introduced to Congress by the Future of A.I. Act in December 2017; before then, there was no such thing as A.I. Governance, which makes me one of the first practitioners in my field. The good news is we can all play a part in pushing back against racist A.I. Bias in A.I. is the civil rights battle of our time; we just have to educate ourselves about how these systems impact our daily lives, then move to serve our own best interests. A great example of this is the residents of San Francisco, who banned the use of facial recognition technologies in public spaces because they did not want to be subject to constant government surveillance.

Mutale Nkonde is an A.I. policy advisor and was part of the team that introduced the Algorithmic Accountability Act in the House. She speaks widely on race, A.I., and policy, and her work has been discussed in MIT Tech Review and Wired. She recently published a piece on Medium calling for critical public interest technology.