Interpretable Multimodal Tucker Fusion Model With Information Filtering for Multimodal Sentiment Analysis

Impact Factor: 4.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Cybernetics)
Xin Nie;Laurence T. Yang;Zhe Li;Xianjun Deng;Fulan Fan;Zecan Yang
{"title":"Interpretable Multimodal Tucker Fusion Model With Information Filtering for Multimodal Sentiment Analysis","authors":"Xin Nie;Laurence T. Yang;Zhe Li;Xianjun Deng;Fulan Fan;Zecan Yang","doi":"10.1109/TCSS.2024.3459929","DOIUrl":null,"url":null,"abstract":"Multimodal sentiment analysis (MSA) integrates multiple sources of sentiment information for processing and has demonstrated superior performance compared to single-modal sentiment analysis, making it widely applicable in domains such as human–computer interaction and public opinion supervision. However, current MSA models heavily rely on black-box deep learning (DL) methods, which lack interpretability. Additionally, effectively integrating multimodal data, reducing noise and redundancy, as well as bridging the semantic gap between heterogeneous data remain challenging issues in multimodal DL. To address these challenges, we propose an interpretable multimodal Tucker fusion model with information filtering (IMTFMIF). We are the first to utilize the multimodal Tucker fusion model for MSA tasks. This approach maps multimodal data into a unified tensor space for fusion, effectively reducing modal heterogeneity and eliminating redundant information while maintaining interpretability. Furthermore, mutual information is employed to filter out task-irrelevant information and explain the association between input and output from an information flow perspective. We propose a novel approach to enhance the comprehension of multimodal data and optimize model performance in MSA tasks. Finally, extensive experiments conducted on three public multimodal datasets demonstrate that our proposed IMTFMIF achieves competitive performance compared to state-of-the-art methods.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1351-1364"},"PeriodicalIF":4.5000,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Social Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10813576/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 0

Abstract

Multimodal sentiment analysis (MSA) integrates multiple sources of sentiment information for processing and has demonstrated superior performance compared to single-modal sentiment analysis, making it widely applicable in domains such as human–computer interaction and public opinion supervision. However, current MSA models heavily rely on black-box deep learning (DL) methods, which lack interpretability. Additionally, effectively integrating multimodal data, reducing noise and redundancy, as well as bridging the semantic gap between heterogeneous data remain challenging issues in multimodal DL. To address these challenges, we propose an interpretable multimodal Tucker fusion model with information filtering (IMTFMIF). We are the first to utilize the multimodal Tucker fusion model for MSA tasks. This approach maps multimodal data into a unified tensor space for fusion, effectively reducing modal heterogeneity and eliminating redundant information while maintaining interpretability. Furthermore, mutual information is employed to filter out task-irrelevant information and explain the association between input and output from an information flow perspective. We propose a novel approach to enhance the comprehension of multimodal data and optimize model performance in MSA tasks. Finally, extensive experiments conducted on three public multimodal datasets demonstrate that our proposed IMTFMIF achieves competitive performance compared to state-of-the-art methods.
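The full text sits behind IEEE Xplore, but the abstract's two key ingredients, Tucker-based fusion into a shared tensor space and mutual-information filtering, follow a well-known pattern. The sketch below is a minimal PyTorch illustration of Tucker fusion for three modalities (in the spirit of MUTAN-style bilinear fusion); the class name `TuckerFusion`, the feature dimensions, and the rank are illustrative assumptions, not the authors' IMTFMIF implementation.

```python
# Hypothetical sketch of Tucker-based multimodal fusion; NOT the authors'
# IMTFMIF code. Text, audio, and video features are projected by factor
# matrices into a low-rank space and contracted with a shared core tensor,
# producing one fused vector in a unified tensor space.
import torch
import torch.nn as nn

class TuckerFusion(nn.Module):
    def __init__(self, d_text, d_audio, d_video, rank=32, d_out=128):
        super().__init__()
        # Factor matrices of the Tucker decomposition (one per modality).
        self.Wt = nn.Linear(d_text, rank)
        self.Wa = nn.Linear(d_audio, rank)
        self.Wv = nn.Linear(d_video, rank)
        # Core tensor G with shape (rank, rank, rank, d_out).
        self.core = nn.Parameter(torch.randn(rank, rank, rank, d_out) * 0.01)

    def forward(self, xt, xa, xv):
        t, a, v = self.Wt(xt), self.Wa(xa), self.Wv(xv)  # each (B, rank)
        # Contract the three factor vectors with the core:
        # z_o = sum_{i,j,k} G[i,j,k,o] * t_i * a_j * v_k
        return torch.einsum('bi,bj,bk,ijko->bo', t, a, v, self.core)

# Usage with dimensions typical of MSA benchmarks (BERT text, COVAREP audio,
# Facet video); these sizes are assumptions for illustration only.
fusion = TuckerFusion(d_text=768, d_audio=74, d_video=35)
xt, xa, xv = torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35)
print(fusion(xt, xa, xv).shape)  # torch.Size([4, 128])
```

Keeping the rank small is what tames the cubic parameter growth of the core tensor. The mutual-information filtering the abstract describes would plausibly enter as an auxiliary loss on top of such a fused code, for example a variational MI estimator penalizing dependence between the fused representation and task-irrelevant input components, though the abstract does not specify which estimator IMTFMIF uses.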
Source Journal
IEEE Transactions on Computational Social Systems
Subject category: Social Sciences (miscellaneous)
CiteScore: 10.00
Self-citation rate: 20.00%
Articles published: 316
Journal description: IEEE Transactions on Computational Social Systems focuses on topics such as modeling, simulation, analysis, and understanding of social systems from the quantitative and/or computational perspective. "Systems" include man-man, man-machine, and machine-machine organizations and adversarial situations, as well as social media structures and their dynamics. More specifically, the transactions publishes articles on modeling the dynamics of social systems, methodologies for incorporating and representing socio-cultural and behavioral aspects in computational modeling, analysis of social system behavior and structure, and paradigms for social systems modeling and simulation. The journal also features articles on social network dynamics, social intelligence and cognition, social systems design and architectures, socio-cultural modeling and representation, and computational behavior modeling, together with their applications.