Multi-modal fusion attention sentiment analysis for mixed sentiment classification

Impact Factor: 1.2 | JCR Quartile: Q4 (Computer Science, Artificial Intelligence)
Zhuanglin Xue, Jiabin Xu
{"title":"Multi-modal fusion attention sentiment analysis for mixed sentiment classification","authors":"Zhuanglin Xue,&nbsp;Jiabin Xu","doi":"10.1049/ccs2.12113","DOIUrl":null,"url":null,"abstract":"<p>Mixed sentiment classification (MSC) technology has a significant research value and application potential in understanding and analysing sentimental interactions. In the process of identifying and analysing complex sentiments, it is still necessary to overcome the difficulties of multi-dimensional sentiment recognition and improve sensitivity to subtle sentimental differences. Therefore, a multi-modal fusion attention sentiment analysis based on MSC to address this challenge is proposed. Firstly, the sentiment analysis fusion strategy based on multi-modal fusion is studied, which can fully utilise the information of multi-modal inputs such as text, audio, and video, thereby gaining a more comprehensive understanding and recognition of sentiments. Secondly, a sentiment analysis model based on multi-modal fusion attention is constructed, which focuses on the key information of multi-modal inputs to achieve an accurate recognition of mixed sentiments. The experimental results show that the proposed method outperforms existing sentiment analysis methods on both datasets, with F1 values of 83.17 and 84.19, accuracy of 39.15 and 39.98, and errors of 0.516 and 0.524, respectively. The accuracy range is 95.38%–99.89%, verifying the superiority of the method in sentiment analysis. It can be seen that this method provides a more effective and reliable MSC solution, which has practical significance for improving the accuracy and recall of sentiment analysis.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"6 4","pages":"108-118"},"PeriodicalIF":1.2000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12113","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Mixed sentiment classification (MSC) has significant research value and application potential for understanding and analysing emotional interactions. Identifying and analysing complex sentiments still requires overcoming the difficulties of multi-dimensional sentiment recognition and improving sensitivity to subtle emotional differences. To address this challenge, a multi-modal fusion attention method for MSC is proposed. First, a fusion strategy for sentiment analysis based on multi-modal fusion is studied; it fully exploits multi-modal inputs such as text, audio, and video to gain a more comprehensive understanding and recognition of sentiment. Second, a sentiment analysis model based on multi-modal fusion attention is constructed; it attends to the key information in the multi-modal inputs to recognise mixed sentiments accurately. Experimental results show that the proposed method outperforms existing sentiment analysis methods on both datasets, with F1 scores of 83.17 and 84.19, accuracy scores of 39.15 and 39.98, and errors of 0.516 and 0.524, respectively; the reported accuracy range is 95.38%–99.89%, supporting the method's superiority in sentiment analysis. The method thus provides a more effective and reliable MSC solution, with practical significance for improving the precision and recall of sentiment analysis.
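The abstract does not include a reference implementation, so the following is a minimal, hypothetical sketch of the attention-based multi-modal fusion it describes: each modality (text, audio, video) is projected into a shared space, learned attention weights set each modality's contribution, and the fused representation feeds a sentiment classifier. All names, the feature dimensions (e.g. 768-d text, 74-d audio, 35-d video), the three-class output, and the PyTorch framing are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch (NOT the authors' released code): attention-based
# fusion of text, audio, and video features for mixed sentiment
# classification, assuming pre-extracted per-utterance feature vectors.
import torch
import torch.nn as nn

class FusionAttentionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=74, video_dim=35,
                 hidden_dim=128, num_classes=3):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.proj = nn.ModuleDict({
            "text": nn.Linear(text_dim, hidden_dim),
            "audio": nn.Linear(audio_dim, hidden_dim),
            "video": nn.Linear(video_dim, hidden_dim),
        })
        # One score per modality drives the fusion attention weights.
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text, audio, video):
        # Stack projected modality embeddings: (batch, 3, hidden_dim).
        feats = torch.stack(
            [torch.tanh(self.proj[m](x))
             for m, x in (("text", text), ("audio", audio), ("video", video))],
            dim=1,
        )
        # Softmax over the modality axis yields per-sample fusion weights.
        weights = torch.softmax(self.attn(feats), dim=1)  # (batch, 3, 1)
        fused = (weights * feats).sum(dim=1)              # (batch, hidden_dim)
        return self.classifier(fused)

# Usage with random stand-in features for a batch of 4 utterances.
model = FusionAttentionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35))
print(logits.shape)  # torch.Size([4, 3])
```

Modality-level attention of this kind lets the model down-weight a noisy channel (e.g. poor-quality audio) on a per-sample basis, which matches the abstract's emphasis on focusing on the key information across multi-modal inputs.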

Source journal: Cognitive Computation and Systems (Computer Science: Computer Science Applications)
CiteScore: 2.50
Self-citation rate: 0.00%
Annual articles: 39
Review time: 10 weeks