Post-traumatic stress disorder has long been one of the hardest mental health problems to diagnose because some patients try to hide symptoms while others exaggerate them. But a new voice analysis technique may be able to take the guesswork out of identifying the disorder using the same technology now used to dial home hands-free or order pizza on a smart speaker.

A team of researchers at New York University School of Medicine, working with SRI International, the nonprofit research institute that developed the smartphone assistant Siri, has created an algorithm that can analyze patient interviews, sort through tens of thousands of variables in their speech and identify minute auditory markers of PTSD that are otherwise imperceptible to the human ear, then make a diagnosis.

The results, published online on Monday in the journal Depression and Anxiety, show the algorithm was able to narrow down the 40,500 speech characteristics of a group of patients — like the tension in the larynx and the timing of the flick of the tongue — to just 18 relevant indicators that together could be used to diagnose PTSD. Based on those 18 speech clues, the algorithm was able to correctly identify patients with PTSD 89 percent of the time.
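The paper's actual pipeline is not described here, but the general pattern it follows — scoring a very large pool of candidate features and keeping only the handful that best separate patients from controls before making a diagnosis — can be sketched in plain Python. Everything below is illustrative: the scoring rule, the synthetic data and the feature count are assumptions for the sketch, not the study's method.

```python
import random

random.seed(0)

def separation_score(values, labels):
    """Toy class-separation score: difference of the two group means,
    scaled by the feature's overall spread (illustrative metric only)."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    spread = (max(values) - min(values)) or 1.0
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg)) / spread

def select_top_features(X, y, k=18):
    """Rank every feature column by its separation score and keep the
    top k — mirroring the reduction from tens of thousands of speech
    features down to a short list of diagnostic indicators."""
    n_features = len(X[0])
    scores = [
        (separation_score([row[j] for row in X], y), j)
        for j in range(n_features)
    ]
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Synthetic data: 40 fake "speech features" per interview; feature 3 is
# deliberately given a signal tied to the (fake) diagnosis label.
y = [i % 2 for i in range(50)]
X = [[random.random() for _ in range(40)] for _ in y]
for row, label in zip(X, y):
    row[3] += 2.0 * label  # inject signal into one feature

top = select_top_features(X, y, k=5)
print(top[0])  # the injected feature ranks first
```

A real system would use a validated statistic and cross-validation rather than this toy score, but the shape of the computation — score everything, keep the few features that matter, classify on those — is the same.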

“They were not the speech features we thought,” said Dr. Charles Marmar, a psychiatry professor at N.Y.U. and one of the authors of the paper. “We thought the telling features would reflect agitated speech. In point of fact, when we saw the data, the features are flatter, more atonal speech. We were capturing the numbness that is so typical of PTSD patients.” As the process is refined, speech pattern analysis could become a widely used biomarker for objectively identifying the disorder, he said.