Evaluation of Fusion Techniques for Multi-modal Sentiment Analysis

Rishabh Shinde, Pallavi Udatewar, Amruta Nandargi, Siddarth Mohan, Ranjana Agrawal, Pankaj Nirale
{"title":"Evaluation of Fusion Techniques for Multi-modal Sentiment Analysis","authors":"Rishabh Shinde, Pallavi Udatewar, Amruta Nandargi, Siddarth Mohan, Ranjana Agrawal, Pankaj Nirale","doi":"10.1109/ASSIC55218.2022.10088291","DOIUrl":null,"url":null,"abstract":"Sentiment Analysis a subset of Affective Computing is often categorized as a Natural Language Processing task and is restricted to the textual modality. Since the world around us is multimodal, i.e., we see things, listen to sounds, and feel the various textures of objects, sentiment analysis must be applied to the different modalities present in our daily lives. In this paper, we have implemented sentiment analysis on the following two modalities - text and image. The study compares the performance of individual single-modal models to the performance of a multimodal model for the task of sentiment analysis. This study employs the use of a functional RNN model for textual sentiment analysis and a functional CNN model for visual sentiment analysis. Multimodality is achieved by performing fusion. Additionally, a comparison of two types of fusion is explored, namely Intermediate fusion and Late fusion. There is an improvement from previous studies that is evident from the experimental results where our fusion model gives an accuracy of 79.63%. The promising results from the study will prove to be helpful for budding researchers in exploring prospects in the field of multimodality and affective domain.","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASSIC55218.2022.10088291","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Sentiment Analysis, a subset of Affective Computing, is often categorized as a Natural Language Processing task and restricted to the textual modality. Since the world around us is multimodal, i.e., we see things, listen to sounds, and feel the various textures of objects, sentiment analysis should be applied to the different modalities present in our daily lives. In this paper, we implement sentiment analysis on two modalities: text and image. The study compares the performance of individual single-modal models to that of a multimodal model for the task of sentiment analysis. A functional RNN model is used for textual sentiment analysis and a functional CNN model for visual sentiment analysis. Multimodality is achieved by performing fusion, and two types of fusion are compared, namely intermediate fusion and late fusion. The experimental results show an improvement over previous studies, with the fusion model achieving an accuracy of 79.63%. The promising results of the study should prove helpful to budding researchers exploring the field of multimodality and the affective domain.
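To make the two fusion strategies concrete, the following is a minimal sketch of an intermediate-fusion and a late-fusion model built from an RNN text branch and a CNN image branch with the Keras functional API. The layer sizes, vocabulary size, image resolution, and number of sentiment classes are illustrative assumptions, not the configuration reported in the paper.

# Sketch of intermediate vs. late fusion for text (RNN) and image (CNN) branches.
# All hyperparameters below are placeholder assumptions, not the paper's values.
from tensorflow.keras import layers, Model

VOCAB_SIZE, MAX_LEN = 20000, 50        # assumed text vocabulary and sequence length
IMG_SHAPE = (64, 64, 3)                # assumed image resolution
NUM_CLASSES = 3                        # e.g. negative / neutral / positive

def text_branch():
    # RNN feature extractor over token sequences
    inp = layers.Input(shape=(MAX_LEN,), name="text")
    x = layers.Embedding(VOCAB_SIZE, 128)(inp)
    x = layers.LSTM(64)(x)
    return inp, x

def image_branch():
    # Small CNN feature extractor over RGB images
    inp = layers.Input(shape=IMG_SHAPE, name="image")
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

# Intermediate fusion: concatenate branch features, then classify jointly.
t_in, t_feat = text_branch()
i_in, i_feat = image_branch()
fused = layers.concatenate([t_feat, i_feat])
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)
intermediate_model = Model([t_in, i_in], out, name="intermediate_fusion")

# Late fusion: each branch produces its own prediction; the class
# probabilities are combined afterwards (here by simple averaging).
t_in2, t_feat2 = text_branch()
i_in2, i_feat2 = image_branch()
t_pred = layers.Dense(NUM_CLASSES, activation="softmax")(t_feat2)
i_pred = layers.Dense(NUM_CLASSES, activation="softmax")(i_feat2)
avg_pred = layers.Average()([t_pred, i_pred])
late_model = Model([t_in2, i_in2], avg_pred, name="late_fusion")

intermediate_model.compile(optimizer="adam",
                           loss="sparse_categorical_crossentropy",
                           metrics=["accuracy"])
late_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

In this sketch the intermediate-fusion model lets the classifier learn interactions between the text and image features, while the late-fusion model keeps the branches independent until their predictions are averaged; which performs better depends on the dataset and training setup.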