Through Project Euphonia, Google partnered with the ALS Therapy Development Institute (ALS TDI) and the ALS Residence Initiative (ALSRI). The idea was that if friends and family of people with ALS could understand their loved ones, then Google could train computers to do the same. It simply needed to present its AI with enough examples of impaired speech patterns.

So, Google set out to record thousands of voice samples. One volunteer, Dimitri Kanevsky, a speech researcher at Google who learned English after becoming deaf as a child in Russia, recorded 15,000 phrases. Those were turned into spectrograms (visual representations of sound) and used to train the AI to understand Kanevsky.
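To make the spectrogram idea concrete: a spectrogram slices an audio signal into short overlapping windows and measures the frequency content of each, producing a 2-D "image" of the sound that an image-style neural network can learn from. The sketch below is purely illustrative (a synthetic tone and off-the-shelf SciPy, not Google's actual pipeline or parameters):

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic one-second "recording": a 440 Hz tone at a 16 kHz sample rate.
# In practice this would be a recorded phrase, like Kanevsky's samples.
sample_rate = 16000
t = np.linspace(0, 1, sample_rate, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)

# Slice the signal into short overlapping windows and take the frequency
# content of each window, yielding a 2-D array: frequency bins x time frames.
freqs, times, spec = spectrogram(audio, fs=sample_rate, nperseg=256, noverlap=128)

print(spec.shape)  # (frequency bins, time frames)
```

Each column of `spec` describes which frequencies are present at one moment in time, which is what lets a model treat speech recognition partly as a pattern-recognition problem on images.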

This is still a work in progress, and for now, Google is focusing on English speakers with the impairments typically associated with ALS. It's calling for volunteers, who can fill out a short form and record a set of phrases. Google also wants its AI to translate sounds and gestures into actions, such as speaking commands to Google Home or sending text messages. Eventually, it hopes to develop AI that can understand anyone, no matter how they communicate.