Diagnosing depression can be a fairly subjective endeavor, as it requires physicians and psychiatrists to rely on patients’ reports of symptoms, including changes in sleep and appetite, low self-esteem, and a loss of interest in things that used to be enjoyable. Now, researchers report some more quantitative measures based on speech that could aid in diagnosing depression and measuring its severity.

Around one in 10 Americans suffers from depression at any time, according to Centers for Disease Control and Prevention statistics, and, in the worst cases, it can leave people with the illness unable to work, sleep, and enjoy life. Depression also has physical consequences in the form of impaired motor skills, coordination, and a general feeling of sluggishness. In recent years, that’s motivated a wide range of researchers to study different aspects of depression, including experts from disciplines as far afield as electrical engineering.

Yes, electrical engineers. Building on the observation that depression interferes with our motor skills, Saurabh Sahu and Carol Espy-Wilson hypothesized that depression might affect our speech in fundamental ways. The pair focused on four basic acoustic properties: speaking rate, and three less-familiar quantities, breathiness, jitter, and shimmer. In speech acoustics, breathiness is relatively high-frequency noise that results from the vocal cords being a bit too relaxed when speaking. Jitter tracks the average variation in the frequency of sound, while shimmer tracks variation in its amplitude—roughly speaking, its volume. The latter three traits are “source traits,” meaning that they’re related to muscles in the vocal cords, and haven’t been studied much before, Espy-Wilson writes in an email.
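To make those two quantities concrete, here is a minimal sketch (not the researchers' actual code) of the standard "local" definitions of jitter and shimmer: the average cycle-to-cycle change in pitch period, and in peak amplitude, each expressed as a fraction of the mean. The toy per-cycle measurements are hypothetical numbers invented for illustration.

```python
# Illustrative sketch: "local" jitter and shimmer from per-cycle
# measurements of a voice recording. Jitter captures cycle-to-cycle
# variation in the pitch period (frequency); shimmer captures the
# analogous variation in peak amplitude (loudness).

def local_jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Same formula, applied to per-cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Hypothetical per-cycle values (periods in seconds, amplitudes in
# arbitrary units) for a few glottal cycles:
periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]
amps = [0.80, 0.78, 0.82, 0.79, 0.81]
print(f"jitter:  {local_jitter(periods):.4f}")
print(f"shimmer: {local_shimmer(amps):.4f}")
```

A perfectly steady voice would score zero on both; the more the period or amplitude wobbles from one cycle to the next, the higher the value.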

Sahu and Espy-Wilson measured those four properties in samples of people talking about their depression and focused on six individuals in particular whose depression had unambiguously subsided. (The audio samples came from a set of 35 that had been recorded by a separate lab for a related 2007 study of other, more readily apparent speech patterns, such as the number and duration of pauses between words and phrases.)

In keeping with other research on speech and depression, Sahu and Espy-Wilson found that four of those six people spoke a bit faster when their condition had improved. In addition, they found that jitter and shimmer went down—that is, the tone and volume of speech changed less frequently from moment to moment—in five of the six people as their depression eased. Breathiness declined in just three of the six.

Based on those results, Sahu and Espy-Wilson conclude, jitter and shimmer could be valuable indicators of a patient’s level of depression, though it will take a larger study and additional tests to see how well jitter and shimmer predict depression independent of a clinical diagnosis. “We have just shown that these parameters are relevant for the distinction. Our next step will be to build a classifier to see how well we are able to detect whether a speaker is depressed or not,” Espy-Wilson says.

The research will be presented Friday at the Acoustical Society of America's fall meeting in Indianapolis.