The study tested five publicly available tools from Apple, Amazon, Google, IBM and Microsoft that anyone can use to build speech recognition services. These tools are not necessarily what Apple uses to build Siri or Amazon uses to build Alexa. But they may share underlying technology and practices with services like Siri and Alexa.

Each tool was tested last year, in late May and early June, and they may operate differently now. The study also notes that at the time of testing, Apple’s tool was set up differently from the others and required some additional engineering before it could be evaluated.

Apple and Microsoft declined to comment on the study. An Amazon spokeswoman pointed to a web page where the company says it is constantly improving its speech recognition services. IBM did not respond to requests for comment.

Justin Burr, a Google spokesman, said the company was committed to improving accuracy. “We’ve been working on the challenge of accurately recognizing variations of speech for several years, and will continue to do so,” he said.

The researchers used these systems to transcribe interviews with 42 people who were white and 73 people who were black. When they compared the results for the two groups, they found a significantly higher error rate for the speakers who were black.