Most aspects of life involve communicating with others, and being understood by them in turn. Many of us take that understanding for granted, but imagine the difficulty and frustration you’d feel if people couldn’t easily understand the way you talk or express yourself. That’s the reality for millions of people living with speech impairments caused by neurologic conditions such as stroke, ALS, multiple sclerosis, traumatic brain injury and Parkinson’s disease.

To help solve this problem, the Project Euphonia team, part of our AI for Social Good program, is using AI to improve computers’ ability to understand diverse speech patterns, including impaired speech. We’ve partnered with the non-profit organizations ALS Therapy Development Institute (ALS TDI) and ALS Residence Initiative (ALSRI) to record the voices of people who have ALS, a neurodegenerative condition that can result in the inability to speak and move. We collaborated closely with these groups to learn about the communication needs of people with ALS, and worked toward optimizing AI-based algorithms so that mobile phones and computers can more reliably transcribe words spoken by people with these kinds of speech difficulties. To learn more about how our partnership with ALS TDI started, read this article from Maeve McNally, Senior Director of Clinical Operations, and Fernando Vieira, Chief Scientific Officer at ALS TDI.
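
The post doesn’t describe how these algorithms are optimized, but as a loose illustration of what personalizing a speech recognizer can look like, here is a minimal sketch that fine-tunes an open-source wav2vec2 ASR model from torchaudio on a single recorded phrase. The audio file phrase_001.wav, the transcript, and the training hyperparameters are placeholders for illustration, not details from Project Euphonia.

```python
import torch
import torchaudio

# Load a pre-trained English ASR model and its character label set.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
labels = bundle.get_labels()                    # ('-', '|', 'E', 'T', ...)
char_to_idx = {c: i for i, c in enumerate(labels)}

def encode(transcript: str) -> torch.Tensor:
    """Map a transcript to label indices; '|' marks word boundaries."""
    text = transcript.upper().replace(" ", "|")
    return torch.tensor([char_to_idx[c] for c in text], dtype=torch.long)

# Placeholder recording/transcript pair: a real personalization set would
# hold many phrases recorded by the individual speaker.
waveform, sample_rate = torchaudio.load("phrase_001.wav")
waveform = waveform.mean(dim=0, keepdim=True)   # mono, shape (1, time)
waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)
target = encode("turn on the lights")

ctc_loss = torch.nn.CTCLoss(blank=0)            # label index 0 is the CTC blank
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for _ in range(3):                              # a few illustrative steps
    emissions, _ = model(waveform)              # (batch, frames, num_labels)
    log_probs = torch.log_softmax(emissions, dim=-1).transpose(0, 1)
    loss = ctc_loss(
        log_probs,                              # (frames, batch, num_labels)
        target.unsqueeze(0),                    # (batch, target_len)
        torch.tensor([emissions.shape[1]]),     # frames per utterance
        torch.tensor([len(target)]),            # characters per transcript
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, a personalized model would be trained on many such phrase recordings and evaluated on held-out speech from the same person before being deployed to a phone or computer.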
