Software That Diagnoses Depression By Analyzing Speech, Facial Expressions

A new algorithm offers depression diagnoses with 85 percent accuracy based on passive observation.

There are approximately one billion websites that are ready and eager to diagnose your depression, give or take a hundred thousand or so. All of them are based on similar sets of a couple dozen questions—Do you have difficulty falling asleep at night? Do you have trouble concentrating or remembering things?—and, while none of them will tell you explicitly that you have depression, they will advise as to whether or not it'd be good to go talk to an IRL doctor. But, as with most internet medical advice, you'd probably be better off just going to the IRL doctor in the first place. While said doctor may ask many of the same things, there is a great deal to a mental health assessment that is not self-reported: appearance, mood, behavior, reasoning, memory, ability to express oneself. For one thing, the IRL doctor is looking to exclude other possible diagnoses, while an online quiz only looks to include.

Software, however, may be on the edge of being able to offer a much deeper analysis, even what could reasonably be called a diagnosis. Stefan Scherer of the University of Southern California and Louis-Philippe Morency of Carnegie Mellon University are developing new methods of using computers to analyze behavior just through passive observation of facial expressions, eye movements, and, now, speech. Their latest work, published in the IEEE Transactions on Affective Computing, focuses on the latter. While the researchers have been able to come up with accurate diagnoses 75 percent of the time just by quantifying the durations and frequencies of smiles together with how often a patient looks at the ground, they've found a way to increase that accuracy by 10 percentage points by adding a speech component—specifically, how the patient runs their vowels together as they speak ("vowel-space ratios").
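
To make the idea concrete, here is a rough sketch of what fusing those signals might look like in code. The feature choices, the synthetic data, and the off-the-shelf logistic-regression classifier are illustrative assumptions rather than the researchers' actual pipeline; only the general recipe, facial features plus a speech measure scored by a classifier, follows the paper's description.

```python
# Illustrative sketch only, not the authors' pipeline. The feature set,
# the synthetic data, and the logistic-regression model are assumptions
# made for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 125  # subjects per group, loosely matching the study's ~250 total

def make_group(smiles_per_min, down_gaze_per_min, vowel_ratio, label):
    # Columns: smiles/min, downward-gaze events/min, vowel-space ratio
    feats = np.column_stack([
        rng.normal(smiles_per_min, 0.8, n),
        rng.normal(down_gaze_per_min, 1.0, n),
        rng.normal(vowel_ratio, 0.04, n),
    ])
    return feats, np.full(n, label)

# Hypothetical group means for the facial features; the vowel-space means
# are in the ballpark the researchers report.
X0, y0 = make_group(3.0, 1.5, 0.55, 0)  # non-depressed
X1, y1 = make_group(1.0, 4.0, 0.49, 1)  # depressed
X = np.vstack([X0, X1])
y = np.concatenate([y0, y1])

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

The hard part in practice, of course, is extracting those features reliably from video and audio in the first place; the classifier sitting on top is the easy bit.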

Morency and Scherer took 250 subjects, some non-depressed and some previously diagnosed with depression, and analyzed their speech via a new algorithm. Based on this data, they were able to come up with vowel-space ratios for each group. The depressed group's ratio was about .49, while the non-depressed group's was about .55.
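
For a sense of what a number like that means mechanically, here is a rough sketch of one way a vowel-space ratio could be computed: the area a speaker's vowels actually cover in formant (F1/F2) space, divided by the area of a reference vowel space. The reference values, the formant numbers, and the convex-hull formulation below are illustrative assumptions; the paper's exact measure may be defined differently.

```python
# Rough illustration of a vowel-space ratio: observed vowel area over a
# reference vowel area in F1/F2 formant space. All values are hypothetical.
import numpy as np
from scipy.spatial import ConvexHull

def vowel_space_ratio(observed, reference):
    """observed, reference: arrays of (F1, F2) formant pairs in Hz."""
    observed_area = ConvexHull(observed).volume    # .volume is area in 2D
    reference_area = ConvexHull(reference).volume
    return observed_area / reference_area

# Hypothetical reference corner vowels /i/, /a/, /u/ for an adult speaker
reference = np.array([[300, 2300], [750, 1250], [350, 800]])

# Hypothetical per-frame formant estimates from one speaker's recording;
# "flatter" speech covers less of the space and pushes the ratio down.
speaker = np.array([[330, 2050], [700, 1250], [390, 900],
                    [460, 1700], [520, 1400], [600, 1100]])

print(round(vowel_space_ratio(speaker, reference), 2))
```

A ratio near 1 would mean the speaker is using the full reference space; a smaller ratio is consistent with flatter, less differentiated articulation.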

So what?

"The [vowel-space] measure captures the range and extremes of a speaker's vowel articulation and aims to capture assessments of both overall articulation as well as psychomotor retardation, a commonly found symptom of depression," the researchers explain. "While we expect that psychomotor retardation is correlated with the assessed vowel space measure further investigations are required to draw a direct link. Within the present study, we do not have access to diagnosis and expert assessments of psychomotor retardation, which we plan to accomplish in the near future." Morency and Scherer found similar correlations among PTSD patients (an average ratio of .51), but this may just be the result of a diagnostic overlap.

In any case, a depression diagnosis is a sometimes (often?) shaky thing, even with the patient right there in front of a likely overtaxed psychiatrist. Software might offer a reasonable way in, if not in making actual diagnoses, then in offering a more robust screening system than Google.