Multimodal Sentiment Analysis Based on Pre-LN Transformer Interaction
Huihui Song, Jianping Li, Zhiping Xia, Zongping Yang, Xiao Du
2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), published 2022-03-04
DOI: 10.1109/ITOEC53115.2022.9734328 (https://doi.org/10.1109/ITOEC53115.2022.9734328)
Abstract
Multimodal sentiment analysis aims to extract and integrate semantic information from multimodal data in order to identify the information and emotions it expresses. The central challenge in this area is to design an effective fusion scheme that can extract and integrate key information from multiple modalities. To address the shortcomings of existing models, such as limited parallel computation and inadequate handling of long-range dependencies, this paper proposes a cross-modal contextual interaction model based on the Pre-LN Transformer (CMCI-PLNT) that performs information interaction among language, audio, and video, and uses a self-attention module to filter redundant information. Finally, a residual network fuses the information and performs sentiment analysis. The core of the model is directed pairwise cross-modal attention, which captures interactions between multimodal sequences at different time steps. The model achieves 81.2% accuracy on the MOSI dataset and 81.5% accuracy on the MOSEI dataset; the experiments demonstrate the feasibility and effectiveness of the proposed model.
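As a rough illustration of the mechanism described in the abstract, the sketch below shows one directed cross-modal attention block with Pre-LN ordering (LayerNorm applied before the attention and feed-forward sublayers, with residual connections around each) in PyTorch. The module names, dimensions, and feed-forward design are assumptions for illustration only; the abstract does not give implementation details of CMCI-PLNT.

```python
# Minimal sketch: one directed cross-modal attention block with Pre-LN ordering.
# All hyperparameters and names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class PreLNCrossModalBlock(nn.Module):
    """A target modality (e.g. language) attends to a source modality (e.g. audio)."""

    def __init__(self, dim: int = 64, num_heads: int = 4, ffn_mult: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)   # Pre-LN on the query (target) stream
        self.norm_kv = nn.LayerNorm(dim)  # Pre-LN on the key/value (source) stream
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_ffn = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_mult * dim), nn.GELU(), nn.Linear(ffn_mult * dim, dim)
        )

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # Cross-modal attention: queries come from the target sequence,
        # keys and values come from the source sequence.
        q = self.norm_q(target)
        kv = self.norm_kv(source)
        attn_out, _ = self.attn(q, kv, kv)
        x = target + attn_out                # residual connection around attention
        x = x + self.ffn(self.norm_ffn(x))   # residual connection around the FFN
        return x


if __name__ == "__main__":
    # Example: a language sequence attends to an audio sequence of different length.
    lang = torch.randn(2, 50, 64)    # (batch, time steps, features)
    audio = torch.randn(2, 120, 64)
    block = PreLNCrossModalBlock()
    print(block(lang, audio).shape)  # torch.Size([2, 50, 64])
```

In a pairwise scheme of this kind, one such block would be instantiated for each directed modality pair (e.g. language←audio, language←video), and the resulting representations fused downstream; how the paper combines them is not specified in the abstract.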