Ziqiang Chen, Dandan Wang, Liangliang Lou, Shiqing Zhang, Xiaoming Zhao, Shuqiang Jiang, Jun Yu, Jun Xiao
{"title":"基于跨模态特征重构和分解的文本引导多模态凹陷检测","authors":"Ziqiang Chen, Dandan Wang, Liangliang Lou, Shiqing Zhang, Xiaoming Zhao, Shuqiang Jiang, Jun Yu, Jun Xiao","doi":"10.1016/j.inffus.2024.102861","DOIUrl":null,"url":null,"abstract":"Depression, a widespread and debilitating mental health disorder, requires early detection to facilitate effective intervention. Automated depression detection integrating audio with text modalities is a challenging yet significant issue due to the information redundancy and inter-modal heterogeneity across modalities. Prior works usually fail to fully learn the interaction of audio–text modalities for depression detection in an explicit manner. To address these issues, this work proposes a novel text-guided multimdoal depression detection method based on a cross-modal feature reconstruction and decomposition framework. The proposed method takes the text modality as the core modality to guide the model to reconstruct comprehensive audio features for cross-modal feature decomposition tasks. Moreover, the designed cross-modal feature reconstruction and decomposition framework aims to disentangle the shared and private features from the text-guided reconstructed comprehensive audio features for subsequent multimodal fusion. Besides, a bi-directional cross-attention module is designed to interactively learn simultaneous and mutual correlations across modalities for feature enhancement. Extensive experiments are performed on the DAIC-WoZ and E-DAIC datasets, and the results show the superiority of the proposed method on multimodal depression detection tasks, outperforming the state-of-the-arts.","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"65 1","pages":""},"PeriodicalIF":14.7000,"publicationDate":"2024-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Text-guided multimodal depression detection via cross-modal feature reconstruction and decomposition\",\"authors\":\"Ziqiang Chen, Dandan Wang, Liangliang Lou, Shiqing Zhang, Xiaoming Zhao, Shuqiang Jiang, Jun Yu, Jun Xiao\",\"doi\":\"10.1016/j.inffus.2024.102861\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Depression, a widespread and debilitating mental health disorder, requires early detection to facilitate effective intervention. Automated depression detection integrating audio with text modalities is a challenging yet significant issue due to the information redundancy and inter-modal heterogeneity across modalities. Prior works usually fail to fully learn the interaction of audio–text modalities for depression detection in an explicit manner. To address these issues, this work proposes a novel text-guided multimdoal depression detection method based on a cross-modal feature reconstruction and decomposition framework. The proposed method takes the text modality as the core modality to guide the model to reconstruct comprehensive audio features for cross-modal feature decomposition tasks. Moreover, the designed cross-modal feature reconstruction and decomposition framework aims to disentangle the shared and private features from the text-guided reconstructed comprehensive audio features for subsequent multimodal fusion. Besides, a bi-directional cross-attention module is designed to interactively learn simultaneous and mutual correlations across modalities for feature enhancement. 
Extensive experiments are performed on the DAIC-WoZ and E-DAIC datasets, and the results show the superiority of the proposed method on multimodal depression detection tasks, outperforming the state-of-the-arts.\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"65 1\",\"pages\":\"\"},\"PeriodicalIF\":14.7000,\"publicationDate\":\"2024-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1016/j.inffus.2024.102861\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.inffus.2024.102861","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Text-guided multimodal depression detection via cross-modal feature reconstruction and decomposition
Depression, a widespread and debilitating mental health disorder, requires early detection to facilitate effective intervention. Automated depression detection that integrates audio and text modalities is a challenging yet significant problem due to information redundancy and inter-modal heterogeneity across modalities. Prior works usually fail to fully and explicitly learn the interaction between audio and text modalities for depression detection. To address these issues, this work proposes a novel text-guided multimodal depression detection method based on a cross-modal feature reconstruction and decomposition framework. The proposed method takes the text modality as the core modality to guide the model in reconstructing comprehensive audio features for cross-modal feature decomposition tasks. The designed cross-modal feature reconstruction and decomposition framework aims to disentangle shared and private features from the text-guided reconstructed comprehensive audio features for subsequent multimodal fusion. In addition, a bi-directional cross-attention module is designed to interactively learn simultaneous and mutual correlations across modalities for feature enhancement. Extensive experiments on the DAIC-WoZ and E-DAIC datasets show the superiority of the proposed method on multimodal depression detection tasks, outperforming state-of-the-art methods.
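To make the bi-directional cross-attention idea concrete, the following is a minimal PyTorch sketch of one way such a module could be structured: each modality's features query the other modality and are residually enhanced. The class name, feature dimensions, and the residual/normalization choices are assumptions made for illustration; this is not the authors' implementation.

```python
# Minimal, illustrative sketch of a bi-directional cross-attention block
# between text and audio features. All names and dimensions are assumed.
import torch
import torch.nn as nn


class BiDirectionalCrossAttention(nn.Module):
    """Each modality attends to the other; outputs are residual-enhanced."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Text queries attend over audio keys/values, and vice versa.
        self.text_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_audio = nn.LayerNorm(dim)

    def forward(self, text_feat: torch.Tensor, audio_feat: torch.Tensor):
        # text_feat: (batch, T_text, dim), audio_feat: (batch, T_audio, dim)
        text_enh, _ = self.text_to_audio(text_feat, audio_feat, audio_feat)
        audio_enh, _ = self.audio_to_text(audio_feat, text_feat, text_feat)
        # Residual connections preserve each modality's original information.
        return self.norm_text(text_feat + text_enh), self.norm_audio(audio_feat + audio_enh)


if __name__ == "__main__":
    block = BiDirectionalCrossAttention(dim=256, num_heads=4)
    text = torch.randn(2, 20, 256)   # e.g. 20 text tokens per sample
    audio = torch.randn(2, 50, 256)  # e.g. 50 audio frames per sample
    t, a = block(text, audio)
    print(t.shape, a.shape)  # (2, 20, 256) and (2, 50, 256)
```

The enhanced text and audio features produced by such a block would then feed the shared/private feature decomposition and the subsequent multimodal fusion described above.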
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.