JMIR Publications Blog

The Power of Your Voice: How AI is Detecting Depression Severity

Reviewed by Kayleigh-Ann Clegg, PhD | Jul 18, 2025

Imagine being able to understand the severity of someone's depression just by listening to their voice. It might sound like something out of science fiction, but new research is bringing this possibility closer to reality.

Dr. Mike Aratow, co-founder and Chief Medical Officer of Ellipsis Health, recently co-authored a groundbreaking paper published in JMIR AI titled "Digital Phenotyping for Detecting Depression Severity in a Large Payor-Provider System: Retrospective Study of Speech and Language Model Performance." This study highlights a significant leap forward in using artificial intelligence to understand mental health.

 

Listening to Understand: AI and Depression Detection

The paper demonstrates how AI can analyze a patient's voice to detect depression severity during real-world case management conversations. The study is notable for its scope and scale, showcasing the AI model's robust performance across diverse populations, including different ages, genders, and socioeconomic statuses, as well as a variety of conversational contexts.

Why is this so important? For this kind of technology to truly help people in practical settings, it needs to work for everyone, regardless of their background or the complexity of their situation. This research shows that the team's AI model is up to that challenge.

Moving Beyond Traditional Methods

Traditionally, detecting and measuring depression has relied on self-report questionnaires or clinical interviews, which are labour-intensive and can be subjective or fail to capture the full picture. Using speech as a digital biomarker offers a significant opportunity to transform and accelerate how we identify and treat depression.

In their study, Dr. Aratow and his team analyzed over 2,000 recordings of real-world case management calls. They used a machine learning model that examined both the semantic (meaning) and acoustic (sound) properties of speech. The results were compelling, demonstrating the model's strong performance in predicting depression severity across demographic groups. This suggests that analyzing speech has the potential to enhance treatment, improve clinical decision-making, and even enable truly personalized treatment recommendations.
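The study does not publish the model's code, but the general idea of combining semantic and acoustic features to predict a severity score can be sketched in a few lines. The example below is purely illustrative and is not the Ellipsis Health model: it uses synthetic numbers in place of real text embeddings and prosody measurements, and a generic scikit-learn regressor stands in for the system described in the paper.

```python
# Illustrative sketch only (not the model from the study): combine
# hypothetical semantic and acoustic features to predict a
# depression-severity score, using synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_calls = 500

# In practice, semantic features might come from a speech-to-text and
# text-embedding pipeline, and acoustic features from signal processing
# (pitch, energy, pause statistics). Here they are random placeholders.
semantic_features = rng.normal(size=(n_calls, 16))
acoustic_features = rng.normal(size=(n_calls, 8))
X = np.hstack([semantic_features, acoustic_features])

# Synthetic target standing in for a questionnaire- or clinician-derived
# severity score associated with each call.
y = X[:, :4].sum(axis=1) + 0.5 * X[:, 16:20].sum(axis=1) \
    + rng.normal(scale=0.5, size=n_calls)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE on held-out calls:", mean_absolute_error(y_test, model.predict(X_test)))
```

The point of the sketch is simply that both streams of information, what is said and how it sounds, feed a single predictive model; the real system's features, architecture, and evaluation are described in the full article.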

Curious to delve deeper into this innovative use of AI in mental healthcare? Watch the video where Dr. Aratow explains his team's work, and read the full research article to explore the design, methods, and detailed results of this transformative study.

 
