Deep learning-based detection of depression by fusing auditory, visual and textual clues

IF 4.9 | CAS Zone 2 (Medicine) | JCR Q1 (Clinical Neurology)
Chenyang Xu, Yangbin Chen, Yanbao Tao, Wanqing Xie, Xiaofeng Liu, Yunhan Lin, Chunfeng Liang, Fan Du, Zhixiong Lin, Chuan Shi
DOI: 10.1016/j.jad.2025.119860
Journal of Affective Disorders, Volume 391, Article 119860 (published 2025-07-16)
URL: https://www.sciencedirect.com/science/article/pii/S0165032725013023
Citations: 0

Abstract

Background

Early detection of depression is crucial for implementing interventions. Deep learning-based computer vision (CV), semantic, and acoustic analyses have enabled the automated analysis of visual and auditory signals.

Objective

We proposed an automated depression detection model based on artificial intelligence (AI) that integrated visual, auditory, and textual clues. Moreover, we validated the model's performance in multiple scenarios, including interviews with a chatbot.

Methods

A chatbot for depressive-symptom inquiry, powered by GPT-2.0, was developed. A brief affective interview task was designed as a supplement. Audio, video, and textual clues were captured during the interviews, and features from the different modalities were fused using a multi-head cross-attention network. To validate the model's generalizability, we performed external validation on an independent dataset.
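The multi-head cross-attention fusion described above can be sketched as follows. This is a minimal illustration, not the authors' architecture: the feature dimensions, number of heads, choice of text as the query modality, and mean-pooling over time are all assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative sketch: text features attend over audio and video
    feature sequences via multi-head cross-attention, and the fused
    representation feeds a binary depression/control classifier.
    All hyperparameters here are assumed, not taken from the paper."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # Text queries, audio/video keys and values (batch, seq, dim).
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_to_video = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(3 * dim, 2)  # depression vs. healthy control

    def forward(self, text, audio, video):
        a, _ = self.text_to_audio(text, audio, audio)  # (batch, text_len, dim)
        v, _ = self.text_to_video(text, video, video)
        # Concatenate text with both attended streams, then pool over time.
        fused = torch.cat([text, a, v], dim=-1).mean(dim=1)
        return self.classifier(fused)

model = CrossModalFusion()
logits = model(torch.randn(2, 10, 128),   # text:  2 samples, 10 tokens
               torch.randn(2, 20, 128),   # audio: 20 frames
               torch.randn(2, 15, 128))   # video: 15 frames
print(logits.shape)  # torch.Size([2, 2])
```

Cross-attention lets each modality's sequence length differ, which suits interviews where audio/video frame rates and token counts do not align.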

Results

(1) In the internal validation set (152 patients with depression and 118 healthy controls), the multimodal model demonstrated strong predictive power for depression in all scenarios, with an area under the curve (AUC) exceeding 0.950 and an accuracy above 0.930. Under the chatbot symptomatic-interview scenario, the model performed exceptionally well, achieving an AUC of 0.999. Specificity decreased slightly (0.883) in the brief affective interview task. The multimodal model outperformed its unimodal and bimodal counterparts. (2) For external validation under the chatbot symptomatic-interview scenario, a geographically distinct dataset (55 patients with depression and 45 healthy controls) was employed. The multimodal fusion model achieved an AUC of 0.978, though all modality combinations exhibited reduced performance compared to internal validation.
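The AUC and specificity figures reported above follow their standard definitions; as a reminder, they can be computed like this with scikit-learn. The labels and scores below are toy values for illustration only, not the study's data, and the 0.5 threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Toy data: 1 = depression, 0 = healthy control (not the study's data).
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.92, 0.85, 0.40, 0.10, 0.35, 0.05, 0.78, 0.55])

auc = roc_auc_score(y_true, y_score)       # area under the ROC curve
y_pred = (y_score >= 0.5).astype(int)      # assumed decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)               # true-negative rate
print(f"AUC={auc:.3f}, specificity={specificity:.3f}")
# → AUC=0.938, specificity=0.750
```

Specificity (the rate at which healthy controls are correctly classified) depends on the chosen threshold, whereas AUC is threshold-free, which is why the two can move independently across scenarios.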

Limitations

Longitudinal follow-up was not conducted in this study, and applicability to severe depression requires further study.
Source journal

Journal of Affective Disorders (Medicine – Psychiatry)
CiteScore: 10.90
Self-citation rate: 6.10%
Annual articles: 1319
Review time: 9.3 weeks

Journal description: The Journal of Affective Disorders publishes papers concerned with affective disorders in the widest sense: depression, mania, mood spectrum, emotions and personality, anxiety, and stress. It is interdisciplinary and aims to bring together different approaches for a diverse readership. Top-quality papers will be accepted dealing with any aspect of affective disorders, including neuroimaging, cognitive neurosciences, genetics, molecular biology, experimental and clinical neurosciences, pharmacology, neuroimmunoendocrinology, and intervention and treatment trials.