Averages across the five companies show that the systems misidentified white people's words about 19 percent of the time. For black people, this figure jumped to 35 percent. Around two percent of audio from white people was considered unreadable, while this rose to 20 percent for black people.

The study suggests that these speech recognition systems perform worse for black speakers because they are trained on insufficiently diverse audio data. The New York Times approached these tech companies for comment and only one -- Google -- responded, stating that "We've been working on the challenge of accurately recognizing variations of speech for several years, and will continue to do so."

The study is the latest to highlight the issue of bias in artificial intelligence. Analysts have found that facial recognition demonstrates both racial and gender bias, while separate tests have consistently shown how chatbots can quickly fall foul of sexist and racist behavior. Indeed, researchers last year warned that artificial intelligence is on the brink of a "diversity disaster."

As The New York Times notes, companies rolling out these systems face a "chicken-and-egg problem." If their services are used mainly by white people, they will struggle to gather the data needed to serve black people, which in turn makes the services less useful to black users and further skews who adopts them. Speaking to the publication, Noah Smith, a professor at the University of Washington, said that "Those feedback loops are kind of scary when you start thinking about them. That is a major concern."