IBM launches tool aimed at detecting AI bias

By Zoe Kleinman, Technology reporter, BBC News
Published 19 September 2018

Image caption: Algorithms and machine learning could increasingly be used to calculate things like car insurance (Getty Images)

IBM is launching a tool which will analyse how and why algorithms make decisions in real time.

The Fairness 360 Kit will also scan for signs of bias and recommend adjustments.
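
The article does not detail how such a scan works. As a rough sketch, one widely used check that bias-scanning tools of this kind can perform is the "disparate impact" ratio, which compares favourable-outcome rates between demographic groups. Everything below - the loan data, the group labels and the 0.8 "four-fifths rule" threshold - is an assumption for illustration, not IBM's implementation.

```python
# Illustrative sketch only: a minimal "disparate impact" check of the kind
# a bias-scanning tool might run. Data and the 0.8 threshold (the common
# "four-fifths rule") are made up for this example.

def disparate_impact(outcomes, groups, privileged="A", favourable=1):
    """Ratio of favourable-outcome rates: unprivileged group / privileged group."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favourable) / len(selected)

    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Hypothetical loan decisions (1 = approved) for two demographic groups.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # values below ~0.8 suggest bias
```

In this toy data, group B is approved far less often than group A, so the ratio falls well below 0.8 and would be flagged for adjustment.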

There is increasing concern that algorithms used by both tech giants and other firms are not always fair in their decision-making.

For example, in the past, image recognition systems have failed to identify non-white faces.

However, as they increasingly make automated decisions about a wide variety of issues such as policing, insurance and what information people see online, the implications of their recommendations become broader.

Often algorithms operate within what is known as a "black box" - meaning their owners can't see how they are making decisions.

The IBM cloud-based software will be open-source, and will work with a variety of commonly used frameworks for building algorithms.

Customers will be able to see, via a visual dashboard, how their algorithms are making decisions and which factors are being used in making the final recommendations.
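
The article does not say how IBM's dashboard derives those factors. One common, model-agnostic way to rank the inputs behind a model's predictions is permutation importance, sketched below with synthetic, insurance-flavoured data and scikit-learn; the feature names and figures are assumptions for illustration, not IBM's method.

```python
# Sketch of one common technique a dashboard could use to rank the factors
# behind a model's decisions: permutation importance. The features and
# synthetic data here are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["age", "postcode_risk", "claims_history"]  # hypothetical inputs
X = rng.normal(size=(500, 3))
# Synthetic target: driven mainly by age and claims history, not postcode.
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")  # higher = more influence on predictions
```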

Image caption: An example of the IBM dashboard (IBM)

It will also track the model's record for accuracy, performance and fairness over time.

"We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making," said David Kenny, senior vice president of Cognitive Solutions.

Bias tools

Other tech firms are also working on solutions.

Google recently released its What-If Tool, which lets developers probe how their machine-learning models behave. Unlike IBM's software, however, Google's tool does not operate in real time - instead, the data can be used to build up a picture over time.

Image caption: An example of Google's What-If dashboard, showing faces identified as non-smiling in blue, and smiling in red, over time (Google)

Machine-learning and algorithmic bias is becoming a significant issue in the AI community.

Part of the problem is that the vast amounts of data that algorithms are trained on are not always sufficiently diverse.
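
One simple way to diagnose that problem, sketched below, is to compare how often each group appears in a training set against a reference population. All the figures here are made up purely for illustration.

```python
# Minimal sketch: flag under-represented groups in a training set by
# comparing against assumed reference population shares (made-up figures).
from collections import Counter

training_labels = ["white"] * 800 + ["black"] * 60 + ["asian"] * 140
reference_share = {"white": 0.60, "black": 0.20, "asian": 0.20}  # assumption

counts = Counter(training_labels)
total = sum(counts.values())
for group, target in reference_share.items():
    actual = counts[group] / total
    flag = "UNDER-REPRESENTED" if actual < 0.5 * target else "ok"
    print(f"{group}: {actual:.0%} of data vs {target:.0%} expected -> {flag}")
```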

Image caption: Joy Buolamwini found her computer system recognised the white mask, but not her face (Joy Buolamwini)

Joy Buolamwini launched the Algorithmic Justice League (AJL) while a postgraduate student at the Massachusetts Institute of Technology in 2016 after discovering that facial recognition only spotted her face if she wore a white mask.

There is a growing debate surrounding artificial intelligence and ethics, said Kay Firth-Butterfield from the World Economic Forum.

"As a lawyer, some of the accountability questions of how do we find out what made [an] algorithm go wrong are going to be really interesting," she said in a recent interview with CNBC