For people who stutter, the convenience of voice assistant technology remains out of reach

Kevin Wheeler | USA TODAY


Do you ever feel as if your voice assistants – whether Siri, Alexa, or Google – don’t understand you? You might repeat your question a little slower, a little louder, but eventually you’ll get the information you were asking for read back to you in the pleasing but lifeless tones of your voice-activated assistant.

But what if talking to your home tech weren’t an option? That’s the question facing many of the 3 million people in the United States who stutter, along with the thousands of others whose impaired speech goes beyond stuttering – and many of them feel left out.

“When this stuff first started coming out, I was all over it,” said Jacquelyn Joyce Revere, a screenwriter from Los Angeles who stutters. “In LA, I need GPS all the time, so this seemed like a more convenient way to live the life I want to live.”

Revere said that since 2011, she has tried to use Siri on iPhone and Apple Watch, and Alexa through an Amazon Fire Stick. Though she continues to try using these voice assistants, her attempts often leave her disappointed.

“Every time I try to use it is another nail in the coffin, another reminder that this technology wasn’t made for me,” Revere said.

But it's not just Alexa and friends

Revere’s frustrations are not limited to voice assistants. Automated phone interfaces also pose problems. For instance, she said it is not uncommon for her to spend up to 40 minutes on hold, only to be dropped from the call by the automated system when she can’t get any words out.

According to Jane Fraser, president of The Stuttering Foundation, phone interfaces have been a common problem within the stuttering community for years. She said The Stuttering Foundation receives hundreds of emails seeking help in dealing with them, and newer technology has made these old problems surface in new ways.

“Overall tech has helped people who stutter, but when you try to tell it what you want, you get the same experience on the phone with a machine or another person – both hang up,” Fraser said.

Different types of stuttering are heard by voice assistants in different ways. Some of these forms of stuttering include prolongations, where sounds are stretched out; repetitions, when speakers repeat sounds or words; and blocks, which are the pauses that occur when a speaker can’t get a word out. Fraser said that people who stutter with blocks often have the most trouble with voice assistants and voice-activated phone interfaces.


“If there’s no voicing, there’s nothing to be heard by the voice assistant, so it shuts off or interrupts,” Fraser said.

Revere often stutters with blocks, and she said that they confound voice assistants more than any other kind of stuttering.

“If I have a block, they completely shut off,” Revere said, though she also noted that Alexa is a bit more patient than Siri.

Tayler Enyart, a paralegal from Elk Grove, California, is another Alexa and Siri user who stutters in blocks. Like Revere, she has trouble using the technology and feels left out because of it.

“Using these technologies isn’t easy for me,” Enyart said. “I’ve tried them all, but they all cut me off. It’s really frustrating, so I just try to avoid them. A part of me feels really left out, but another part is like, ‘I’m used to this.’”

How Big Tech is responding

Apple, Amazon, and Google all have their own ways of providing accessibility for people who stutter or who have other speech disabilities. Amazon and Apple have released features that let users type commands to Alexa or Siri, such as Tap to Alexa on the Echo Show and Type to Siri, introduced with iOS 11 in 2017.

Google is constantly updating its Google Assistant with new voice samples to better understand people with accents or speech disfluencies. And in 2019, the company announced a research project called Project Euphonia, whose goal is to eventually create a recognition model that can understand people with speech disabilities across all computer platforms.

The project, which was inspired by ALS patients, aims to collect enough audio samples to create a sound model that can predict and understand impaired speech patterns.

“There is a revolution going on with how people use voice to interact with computers, but some people are being left behind,” said Michael Brenner, a Harvard professor of applied mathematics and one of the researchers behind Project Euphonia.

According to Brenner, Project Euphonia's biggest challenge is a lack of data. Because voice recognition technology is trained to hear standard speech, Project Euphonia needs audio samples of impaired speech to train computers to be able to understand it as well. Ideally, Project Euphonia would have access to tens of millions of audio samples, creating a statistical model that can predict and understand the sounds of impaired speech, but that is an impossible standard.

“We don’t want to overpromise because we don’t know what is possible, but we want to help people,” Brenner said. “It would be ideal to have enough samples to have a general model, but we don’t. We want to find out how to do things that are useful with the speech we have.”


Erich Reiter is a speech recognition engineer and speech pathologist who helped develop Siri when he worked for Nuance Communications. He has been keenly aware of the failures of voice recognition technology for people with impaired speech since 2012.

That year, a shallow-water diving accident left one of Reiter’s friends quadriplegic, able to communicate only through an augmentative and alternative communication (AAC) device, similar to the one English theoretical physicist Stephen Hawking used. According to Reiter, it would take his friend three minutes to say something as simple as “I need water.”

This experience inspired Reiter to return to school to become a speech pathologist and eventually create voice recognition technology that helps people who stutter.

Reiter is now one of the founders of Say-IT Labs, a Brussels-based startup making video games intended to help people who stutter practice effective speech techniques. The goal, according to Reiter, is to make speech therapy more accessible.

“If you see a school SLP, chances are they have a caseload of about 60-80 students,” Reiter said. “That doesn’t give them enough time to work on real progress.”


Say-IT Labs is currently working on a game called Fluency Friends, where players control colorful cartoon characters through treacherous obstacles. Only instead of a joystick or a keyboard, the game will prompt players with words or sentences they must say to move through the level.

Similar to Project Euphonia, Say-IT Labs needs data to create an acoustic model for the game that understands stuttered speech. So far, according to Reiter, his team has collected samples from 50 participants.

Unlike Project Euphonia, however, Say-IT Labs needs far less data to work with because stuttered speech varies less than the impaired speech of patients with neurological disorders, according to Reiter.

“I hope that the kind of technology we’re creating will allow us to help educate SLPs and people who stutter, instead of just helping 50 people (through conventional therapy),” Reiter said. “For the 1% of the population who stutters, it would be a gift to reach 1,000.”

Despite these efforts, voice assistants remain inaccessible for many people who stutter. Hannah Arnold is a dental receptionist from Kent, Washington, who first tried using Siri and Alexa in 2013. She approached those first attempts with some anxiety, unsure how the machines would respond to her stutter.

”I have trouble with Ws. It takes me a while to say them,” Arnold said. “So Siri would always interrupt me when I’d try to say, ‘Hey, Siri, what is the weather going to be like today?’”

Today, Arnold rarely uses voice assistants, but she sees how they could easily make her life better. For instance, instead of typing on her Apple Watch screen or pulling up directions in her car, she could simply ask Siri.

“It’s pretty hard because I feel like voice recognition – and talking to other technology – is expanding,” Arnold said. “But with advancements in new technology, I’m hopeful we’ll have more accommodations."