{"title":"CSMF-SPC: Multimodal Sentiment Analysis Model with Effective Context Semantic Modality Fusion and Sentiment Polarity Correction","authors":"Yuqiang Li, Wenxuan Weng, Chun Liu, Lin Li","doi":"10.1007/s10044-024-01320-w","DOIUrl":null,"url":null,"abstract":"<p>Multimodal sentiment analysis focuses on the fusion of multiple modalities. However, modality representation learning is a key step for better modality fusion, so how to fully learn the sentiment information of non-text modalities is a problem worth exploring. In addition, how to further improve the accuracy of sentiment polarity prediction is also a work to be studied. To solve the above problems, we propose a multimodal sentiment analysis model with effective context semantic modality fusion and sentiment polarity correction (CSMF-SPC). Firstly, we design a low-rank multimodal fusion network based on context semantic modality (CSM-LRMFN). CSM-LRMFN uses the bi-directional long short-term memory network to extract the context semantic features of non-text modalities, and the BERT to extract the features of text modality. Then, CSM-LRMFN adopts a low-rank multimodal fusion method to fully extract the interaction information among modalities with contextual semantics. Different from previous studies, to improve the accuracy of sentiment polarity prediction, we design a weight self-adjusting sentiment polarity penalty loss function, which makes the model learn more sentiment features that are conducive to model prediction through backpropagation. Finally, a series of comparative experiments are conducted on the CMU-MOSI and CMU-MOSEI datasets. Compared with the current representative models, CSMF-SPC achieves better experimental results. Among them, the Acc-2 (including zero) metric is increased by 1.41% and 1.58% on the word-aligned and unaligned CMU-MOSI datasets respectively; it is improved by 1.50% and 2.14% respectively on the CMU-MOSEI dataset, which indicates that the improvement of CSMF-SPC is effective.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"109 1","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Analysis and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10044-024-01320-w","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Multimodal sentiment analysis centers on the fusion of multiple modalities. Modality representation learning, however, is a key step toward better fusion, so how to fully learn the sentiment information carried by non-text modalities remains a question worth exploring. Further improving the accuracy of sentiment polarity prediction is likewise an open problem. To address these issues, we propose a multimodal sentiment analysis model with effective context semantic modality fusion and sentiment polarity correction (CSMF-SPC). First, we design a low-rank multimodal fusion network based on context semantic modalities (CSM-LRMFN). CSM-LRMFN uses bi-directional long short-term memory networks to extract context semantic features from the non-text modalities and BERT to extract features from the text modality. It then applies a low-rank multimodal fusion method to fully capture the interaction information among the modalities with contextual semantics. Unlike previous studies, to improve the accuracy of sentiment polarity prediction we design a weight self-adjusting sentiment polarity penalty loss function, which drives the model, through backpropagation, to learn more sentiment features that benefit prediction. Finally, a series of comparative experiments is conducted on the CMU-MOSI and CMU-MOSEI datasets. Compared with current representative models, CSMF-SPC achieves better results: the Acc-2 (including zero) metric improves by 1.41% and 1.58% on the word-aligned and unaligned CMU-MOSI datasets, respectively, and by 1.50% and 2.14% on the CMU-MOSEI dataset, indicating that the improvements in CSMF-SPC are effective.
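For readers who want a concrete picture of the fusion step, the sketch below shows a generic low-rank multimodal fusion layer in PyTorch, in the spirit of the low-rank fusion method the abstract describes. The class name, tensor shapes, rank, and initialization are illustrative assumptions rather than the paper's exact CSM-LRMFN configuration; in CSMF-SPC the inputs would be the BiLSTM-encoded audio/visual features and the BERT-encoded text features.

```python
# A minimal sketch of a low-rank multimodal fusion layer, written in PyTorch.
# All names, dimensions, and the rank value are illustrative assumptions, not
# the exact CSM-LRMFN configuration from the paper.
import torch
import torch.nn as nn


class LowRankFusion(nn.Module):
    """Fuses per-modality feature vectors with rank-constrained factors."""

    def __init__(self, modality_dims, fusion_dim, rank=4):
        super().__init__()
        self.rank = rank
        # One factor tensor per modality; "+ 1" accounts for the constant 1
        # appended to each modality vector before fusion.
        self.factors = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(rank, d + 1, fusion_dim))
             for d in modality_dims]
        )
        self.fusion_weights = nn.Parameter(0.01 * torch.randn(1, rank))
        self.fusion_bias = nn.Parameter(torch.zeros(1, fusion_dim))

    def forward(self, feats):
        # feats: list of (batch, dim) tensors, one per modality
        # (e.g. BERT-encoded text, BiLSTM-encoded audio and visual features).
        batch = feats[0].size(0)
        fused = None
        for feat, factor in zip(feats, self.factors):
            ones = feat.new_ones(batch, 1)
            f = torch.cat([feat, ones], dim=1)               # (batch, dim + 1)
            proj = torch.einsum('bd,rdf->rbf', f, factor)    # (rank, batch, fusion)
            fused = proj if fused is None else fused * proj  # element-wise product
        # Collapse the rank dimension with learned weights.
        out = torch.einsum('or,rbf->bf', self.fusion_weights, fused)
        return out + self.fusion_bias


# Example with hypothetical feature sizes: text (768), audio (74), visual (35).
fusion = LowRankFusion(modality_dims=[768, 74, 35], fusion_dim=64, rank=4)
text, audio, visual = torch.randn(8, 768), torch.randn(8, 74), torch.randn(8, 35)
fused = fusion([text, audio, visual])   # -> (8, 64)
```

The element-wise product of the per-modality low-rank projections approximates a full outer-product (tensor) fusion at a fraction of the parameter cost, which is the general motivation for low-rank multimodal fusion.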
About the journal:
The journal publishes high-quality articles in areas of fundamental research in intelligent pattern analysis and applications in computer science and engineering. It aims to provide a forum for original research that describes novel pattern analysis techniques and industrial applications of current technology. In addition, the journal publishes articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control. The journal publishes articles on the use of advanced pattern recognition and analysis methods, including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations, that are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition, and uncertainty management in applied pattern recognition are particularly solicited.