DeLiang Wang was in college when his mother began to lose her hearing in the 1980s. Today, she struggles to listen to and participate in a conversation, even with her hearing aids. Dinners with her large family are frustrating and, often, exhausting.

At 91, she is “essentially deaf,” Wang said, because her hearing aids provide so little benefit that she seldom wears them. So Wang, now a professor of computer science and engineering at Ohio State University, is building a better hearing aid, with some help from GPUs and deep learning.

More than 75 percent of people who need hearing aids don’t wear them. Their biggest frustration: Hearing aids don’t work well in noisy situations.

The Problem with Cocktail Parties

A person with normal hearing can distinguish between a voice and the simultaneous roar of a bus accelerating on the street outside. But a hearing aid cranks up the volume on both, creating an incomprehensible din, Wang said.

That din is known as the “cocktail party problem.” The human auditory system can naturally tune out music and conversations in the background to focus on one voice in a crowded room. Creating a hearing aid that mimics that ability has stumped scientists for decades.

Wang believes hearing aids should be as easy to use and effective as glasses are for anyone without 20/20 vision.

“I want people with hearing aids to be able to hear as well as someone without hearing loss,” he said.

How Sounds of Explosions Help

To improve hearing aids, Wang developed a deep learning program to separate speech from noise. As a first step, he and his team trained a neural network to use volume, frequency and other qualities of sound to tell the difference between speech and noise.
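The core idea of classifying each slice of sound as speech-dominant or noise-dominant can be sketched with a simple time-frequency mask. The sketch below is illustrative, not Wang's actual system: the energy values, the 0 dB threshold, and the four bands are made up to show the concept, and a real system would learn the mask from features like volume and frequency rather than compute it from known speech and noise energies.

```python
import math

# Sketch of time-frequency masking for speech/noise separation.
# Each "unit" is one frequency band in one time frame; the energies
# below are hypothetical stand-ins, not real audio data.

def binary_mask(speech_energy, noise_energy, snr_threshold_db=0.0):
    """Label a unit 1 (keep) if speech dominates the noise, else 0 (discard)."""
    snr_db = 10.0 * math.log10(speech_energy / noise_energy)
    return 1 if snr_db > snr_threshold_db else 0

speech = [4.0, 0.25, 3.5, 0.125]   # hypothetical speech energy per band
noise  = [0.5, 2.0, 0.5, 3.0]      # hypothetical noise energy per band

# What the microphone actually records is the sum of both.
mixture = [s + n for s, n in zip(speech, noise)]

# Keep only the bands where speech dominates.
mask = [binary_mask(s, n) for s, n in zip(speech, noise)]
filtered = [m * x for m, x in zip(mask, mixture)]

print(mask)      # [1, 0, 1, 0]
print(filtered)  # [4.5, 0.0, 4.0, 0.0]
```

In a deployed hearing aid the true speech and noise energies are unknown, which is exactly why a neural network is trained to predict the mask from the noisy mixture alone.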

Next, researchers had to teach their neural network the sound of speech, as well as a range of background noises. This included a standard set of IEEE spoken sentences, sounds from a hospital cafeteria and 10,000 movie sound effects — everything from exploding bombs and breaking glass to everyday sounds you’d hear in a living room or kitchen.

To accelerate training, researchers used the CUDA parallel computing platform, NVIDIA TITAN X GPUs and cuDNN with the TensorFlow deep learning framework.

Comprehension Up 9X

After numerous rounds of training, Wang created a “digital filter” that isolates speech from background noise and automatically adjusts the volume of each separately.
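One way to picture “adjusts the volume of each separately”: a soft mask splits the mixture into an estimated speech stream and an estimated noise stream, which are then re-mixed with independent gains. This is a conceptual sketch only; the mask values and gains below are invented for illustration, and the trained network, not a hand-set list, would supply the mask in practice.

```python
# Conceptual sketch: split a noisy mixture into speech and noise
# estimates using a soft mask, then re-mix with separate gains.

def remix(mixture, speech_mask, speech_gain=1.0, noise_gain=0.25):
    """Keep the estimated speech at full volume, attenuate the noise."""
    out = []
    for x, m in zip(mixture, speech_mask):
        speech_est = m * x           # portion attributed to speech
        noise_est = (1.0 - m) * x    # remainder attributed to noise
        out.append(speech_gain * speech_est + noise_gain * noise_est)
    return out

mixture = [4.0, 2.0, 8.0]        # hypothetical per-band mixture energy
speech_mask = [1.0, 0.5, 0.25]   # hypothetical network output in [0, 1]
print(remix(mixture, speech_mask))  # [4.0, 1.25, 3.5]
```

Because the two streams have independent gains, the same filter can mute background noise entirely or merely turn it down, depending on the listener's preference.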

The researchers tested their deep learning hearing aid software on a dozen people who wore hearing aids in both ears and a dozen without hearing impairment, measuring how well each group could understand speech against two types of background noise — babble noise and cafeteria noise.

Those with hearing loss saw dramatic increases in their ability to understand words obscured by noise. For some, comprehension leapt from just 10 percent to 90 percent.

Even people without hearing problems could better understand speech with noise in the background. (To learn more, see this paper by Wang and his team on training and testing the deep learning algorithm.)

“That means our program could someday help far more people than we originally anticipated,” Wang said.

Clearer Battlefield Communications

The deep learning hearing aid technology could also improve speech recognition on cellphones, help workers on noisy factory floors or equip soldiers so they can hear each other amid the cacophony of battle, Wang said.

There’s more work to be done, but Wang said he keeps his mother in mind as he pushes the deep learning program to work in more environments and tests it on more people.

“My mother has been a constant inspiration,” he said.

To find out more about deep learning, listen to our AI Podcast on iTunes or Google Play Music or read our blog explaining this fast-growing branch of machine learning.