A new study of automated voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.

Previous studies have shown that AI systems like facial recognition develop racial bias when trained on data sets of primarily white faces.

AI expert Sandra Wachter told Business Insider it's crucial we develop more diverse data sets alongside tools to allow courts to detect biased algorithms.


Voice recognition systems from big tech companies are ingrained with racial bias, a new study has found.

Published on Monday, the study scrutinized how voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft fared when transcribing the voices of black and white people.

The researchers gave the voice-recognition systems almost 20 hours of interviews to transcribe from 42 white and 73 black interviewees. The average error rate for the white interviewees was 19%, whereas for the black interviewees it was 35%.
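For speech transcription, an "error rate" conventionally means the word error rate: the number of word insertions, deletions, and substitutions needed to turn the machine's transcript into the human reference transcript, divided by the reference length. The study's exact scoring pipeline isn't described here, but a minimal sketch of that standard metric looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance)
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of two reference words gives a 50% error rate.
print(wer("hello world", "hello word"))  # 0.5
```

By this measure, a 35% error rate means roughly one in three words in the reference transcript was dropped, inserted around, or mis-transcribed.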

The study mirrors other research showing that AI facial-recognition systems can become racially biased, misidentifying people with darker skin tones more frequently because the datasets they are trained on are composed predominantly of white faces.

In the new study, Microsoft's system performed best overall, with a 15% error rate for white speakers and 27% for black speakers. Apple's performed worst, with a 23% error rate for white speakers and 45% for black speakers.

"Here are probably the five biggest companies doing speech recognition, and they are all making the same kind of mistake," John Rickford, one of the authors of the study, told The New York Times. The study notes that automated speech recognition is used to power virtual assistants like Siri and Alexa.

Gender disparities have also surfaced in some cases. In 2018, Amazon scrapped an AI hiring tool it had built after realizing the tool was systematically discriminating against female applicants.

Business Insider contacted Amazon, Apple, Google, IBM, and Microsoft for comment.

"This is yet another example of sampling bias that demonstrates the discriminatory impact on certain communities," AI expert Sandra Wachter told Business Insider. "Compared to 'traditional' forms of discrimination ... automated discrimination is more abstract and unintuitive, subtle, intangible, and difficult to detect," she added.

According to Wachter, there are two ways to fight such ingrained bias: diversify the datasets, and give courts the tools to detect and penalize algorithms that perpetuate historic discrimination.

"This type of bias testing is essential. If we do not act now, we will not only exacerbate existing inequalities in our society but also make them less detectable," Wachter said.