Artificial intelligence is here to stay. Machines are getting smarter, faster, and are poised to play ever-greater roles in our healthcare, our education, our decision-making, our businesses, our news, and our governments.

Humans stand to gain from AI in a number of ways. But AI also has the potential to replicate or exacerbate long-standing biases. As machine learning has matured beyond simpler task-based algorithms, it has come to rely more heavily on deep-learning architectures that pick up on relationships that no human could see or predict. These algorithms can be extraordinarily powerful, but they are also “black boxes”: the inputs and the outputs may be visible, but how exactly the two are related is not transparent. Given this complexity, bias can creep into an algorithm’s outputs without its designers intending it to, or even knowing the bias is there. So perhaps it is unsurprising that many people are wary of the power vested in machine-learning algorithms. Inhi Cho Suh, General Manager of IBM Watson Customer Engagement, and Florian Zettelmeyer, a professor of marketing at Kellogg and chair of the school’s marketing department, are both invested in understanding how deep-learning algorithms can identify, account for, and reduce bias.

The pair discuss the social and ethical challenges machine learning poses, as well as the more general question of how developers and companies can go about building AI that is transparent, fair, and socially responsible.

This interview has been edited for length and clarity.

Florian ZETTELMEYER: So, let me kick it off with one example of bias in algorithms: the quality of facial recognition. The subjects used to train the algorithm are vastly more likely to be nonminority than members of minorities. As a result, facial recognition turns out to be more accurate if you happen to look conventionally Western than if you have some other ethnicity.

Inhi Cho SUH: Yes, that’s one example of bias caused by a lack of data. Another really good example is loan approval. If you look at the financial-services sector, there are fewer women-owned businesses. So you may have loans being arbitrarily denied rather than approved, because the lack of sufficient data adds too much uncertainty.

ZETTELMEYER: You don’t want to approve a loan unless you have some level of certainty [in the accuracy of your algorithm], but a lack of data doesn’t allow you to make your statistical inputs good enough.
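The dynamic Zettelmeyer and Suh describe can be made concrete with a toy simulation. The sketch below is purely illustrative and is not how any production facial-recognition or lending system works: a simple threshold classifier is fit on pooled data in which one group supplies 95 percent of the examples, and its accuracy on the underrepresented group suffers because the boundary it learns is dominated by the majority.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a 95/5 imbalance between two groups.
n_major, n_minor = 950, 50
x_major = rng.normal(0.0, 1.0, n_major)
x_minor = rng.normal(0.0, 1.0, n_minor)

# The "true" decision boundary differs by group: 0.0 vs. 1.0.
y_major = (x_major > 0.0).astype(int)
y_minor = (x_minor > 1.0).astype(int)

x = np.concatenate([x_major, x_minor])
y = np.concatenate([y_major, y_minor])

# Fit a single global threshold by minimizing overall training error --
# a stand-in for any model trained naively on pooled, imbalanced data.
candidates = np.sort(x)
errors = [np.mean((x > t).astype(int) != y) for t in candidates]
best_t = candidates[int(np.argmin(errors))]

# Overall error is low, but it is driven by the majority group:
# the learned boundary sits near 0.0, far from the minority's 1.0.
acc_major = np.mean((x_major > best_t).astype(int) == y_major)
acc_minor = np.mean((x_minor > best_t).astype(int) == y_minor)
print(f"majority-group accuracy: {acc_major:.2f}")
print(f"minority-group accuracy: {acc_minor:.2f}")
```

Because the fitting objective weights every sample equally, the 950 majority samples pull the learned threshold toward their own boundary, and the minority group pays the accuracy cost, which is the essence of the "bias from lack of data" problem the speakers raise.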

What do you think of the Microsoft bot example on Twitter [where the bot quickly mirrored other users’ sexist and racist language]? That’s another source of bias: it seems to be a case where an algorithm gets led astray because the people it is learning from are not very nice.

SUH: There are some societal and cultural norms that are more acceptable than others. Each of us learns the difference between what is and isn’t acceptable through experience. For an AI system, that’s going to require a tremendous amount of thoughtful training. Otherwise, it won’t pick up on sarcasm; it’ll pick up on the wrong context in the wrong situation.

ZETTELMEYER: That’s right. In some sense, we face this with our children: they live in a world that is full of profanity, but we would like them not to use that language. It’s very difficult. They need a set of value instructions; they can’t just be picking up everything from what’s around them.

SUH: Absolutely. And Western culture is very different from Eastern culture, or Middle Eastern culture. So culture must be considered, and the value code [that the algorithm is trained with] has to be intentionally designed. You do that by bringing in policymakers, academics, designers, and researchers who understand the user’s values in various contexts.

ZETTELMEYER: I think there’s actually a larger point here that goes even beyond the notion of bias. I’m trained as an economist, and economics often has not done a particularly good job of incorporating the notion of “values” into economic analysis. There’s a very strong sense of striving for efficiency, and as long as things are efficient, you can avoid considering whether the outcomes are beneficial to society. What I find interesting is that in this entire space of AI and analytics, the discussion around values is supercharged. I think that has to do with the fact that analytics and AI are very powerful weapons that can be used in very strategic, very targeted ways. As a result, it seems absolutely crucial for an organization that chooses to implement these techniques to have a code of conduct or a set of values that governs them. Right?
I mean, just because you can do something doesn’t mean that you actually ought to do it.

Where you have these very powerful tools available that can really move things, you have an obligation to understand the larger impact.

SUH: Accountability is one of the five areas we are focusing on for creating trust in AI.

Many businesses are applying AI to not just create better experiences for consumers, but to monetize for profit. They may be doing it in ways where, say, data rights may not be balanced appropriately with the return on economic value, or efficiency. So it’s an important discussion: Who’s accountable when there are risks in addition to benefits?

ZETTELMEYER: Do you think this is new?

SUH: I do, a little bit, because in previous scenarios, business programs and applications were programmable: you had to put in the logic and rules [explicitly]. When you get into machine learning, you’re not going to have direct human intervention at every step. So then, what are the design principles that you intended?

“In this entire space of AI and analytics, the discussion around values is supercharged.” — Florian Zettelmeyer
