Text-Guided Reconstruction Network for Sentiment Analysis With Uncertain Missing Modalities

IF 9.8 · CAS Region 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Piao Shi;Min Hu;Satoshi Nakagawa;Xiangming Zheng;Xuefeng Shi;Fuji Ren
DOI: 10.1109/TAFFC.2025.3541743
Journal: IEEE Transactions on Affective Computing, vol. 16, no. 3, pp. 1825-1838
Published: 2025-02-13 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10884915/
Citations: 0

Abstract

Multimodal Sentiment Analysis (MSA) is an attractive research area that aims to integrate the sentiment expressed in textual, visual, and acoustic signals. Existing methods have two main problems: 1) the dominant role of text in unaligned multimodal data is underutilized, and 2) modalities with uncertain missing features are not sufficiently explored. This paper proposes a Text-guided Reconstruction Network (TgRN) for MSA with uncertain missing modalities in non-aligned sequences. The TgRN comprises three primary modules: a Text-guided Extraction Module (TEM), a Reconstruction Module (RM), and a Text-guided Fusion Module (TFM). First, the TEM consists of text-guided cross-attention units and self-attention units that capture inter-modal and intra-modal features, respectively. Second, leveraging enhanced attention units and a three-way squeeze-and-excitation block, the RM learns semantic information from incomplete data and reconstructs missing modality features. Third, the TFM uses a progressive modality-mixing adaptation gate to explore the dynamic correlations between nonverbal and verbal modalities, effectively addressing the modality-gap issue. Finally, under the supervision of a sentiment prediction loss and a reconstruction loss, the TgRN handles both uncertain missing-modality conditions and ideal complete-modality conditions. Extensive experiments on CMU-MOSI and CH-SIMS demonstrate that the proposed method outperforms state-of-the-art approaches.
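The abstract does not give the TEM's parameterization (projections, head count, normalization), but the idea of text-guided cross attention can be illustrated with a minimal single-head numpy sketch; all shapes and names below are assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_guided_cross_attention(text, other):
    """Single-head cross attention: text tokens query another modality's frames.

    text  : (T_t, d) text features, used as queries
    other : (T_m, d) visual or acoustic features, used as keys and values
    """
    d = text.shape[-1]
    scores = text @ other.T / np.sqrt(d)   # (T_t, T_m) text-to-modality affinities
    weights = softmax(scores, axis=-1)     # each text token attends over all frames
    return weights @ other, weights        # (T_t, d) text-aligned modality features

rng = np.random.default_rng(0)
text = rng.standard_normal((4, 8))    # 4 text tokens, feature dim 8
audio = rng.standard_normal((6, 8))   # 6 unaligned acoustic frames, same dim
fused, attn = text_guided_cross_attention(text, audio)
print(fused.shape)  # (4, 8): one attended acoustic summary per text token
```

Note that the output length follows the text queries (4 steps here, from 6 acoustic frames), which is one way such a unit can operate on non-aligned sequences without explicit word-level alignment.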
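The "three-way squeeze-and-excitation block" in the RM is not specified beyond its name; a plausible reading, sketched below under that assumption, applies an SE-style squeeze-then-gate over the three modality streams so each stream is reweighted by a learned scalar:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def three_way_squeeze_excite(text, visual, acoustic, W1, W2):
    """SE-style recalibration over three modality streams.

    Each stream is (T, d). Squeeze: mean-pool each stream to a d-vector and
    concatenate to 3d. Excite: a bottleneck MLP emits one sigmoid gate per stream.
    """
    squeezed = np.concatenate([m.mean(axis=0) for m in (text, visual, acoustic)])  # (3d,)
    hidden = np.maximum(W1 @ squeezed, 0.0)   # (r,) bottleneck with ReLU
    gates = sigmoid(W2 @ hidden)              # (3,) one scalar gate per modality
    return [g * m for g, m in zip(gates, (text, visual, acoustic))], gates

rng = np.random.default_rng(1)
d, r = 8, 4
streams = [rng.standard_normal((t, d)) for t in (4, 6, 5)]  # unaligned lengths
W1 = rng.standard_normal((r, 3 * d)) * 0.1
W2 = rng.standard_normal((3, r)) * 0.1
reweighted, gates = three_way_squeeze_excite(*streams, W1, W2)
print(gates.shape)  # (3,)
```

Because the gates are computed from all three pooled streams jointly, a stream whose features are missing or corrupted can be down-weighted based on cross-modal context, which fits the RM's goal of learning from incomplete data.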
Source Journal
IEEE Transactions on Affective Computing
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 15.00
Self-citation rate: 6.20%
Articles per year: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.