{"title":"TriSAT: Trimodal Representation Learning for Multimodal Sentiment Analysis","authors":"Ruohong Huan;Guowei Zhong;Peng Chen;Ronghua Liang","doi":"10.1109/TASLP.2024.3458812","DOIUrl":null,"url":null,"abstract":"Transformer-based multimodal sentiment analysis frameworks commonly facilitate cross-modal interactions between two modalities through the attention mechanism. However, such interactions prove inadequate when dealing with three or more modalities, leading to increased computational complexity and network redundancy. To address this challenge, this paper introduces a novel framework, Trimodal representations for Sentiment Analysis from Transformers (TriSAT), tailored for multimodal sentiment analysis. TriSAT incorporates a trimodal transformer featuring a module called Trimodal Multi-Head Attention (TMHA). TMHA considers language as the primary modality, combines information from language, video, and audio using a single computation, and analyzes sentiment from a trimodal perspective. This approach significantly reduces the computational complexity while delivering high performance. Moreover, we propose Attraction-Repulsion (AR) loss and Trimodal Supervised Contrastive (TSC) loss to further enhance sentiment analysis performance. We conduct experiments on three public datasets to evaluate TriSAT's performance, which consistently demonstrates its competitiveness compared to state-of-the-art approaches.","PeriodicalId":13332,"journal":{"name":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","volume":"32 ","pages":"4105-4120"},"PeriodicalIF":4.1000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10675444/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ACOUSTICS","Score":null,"Total":0}
Citations: 0
Abstract
Transformer-based multimodal sentiment analysis frameworks commonly facilitate cross-modal interactions between two modalities through the attention mechanism. However, such interactions prove inadequate when dealing with three or more modalities, leading to increased computational complexity and network redundancy. To address this challenge, this paper introduces a novel framework, Trimodal representations for Sentiment Analysis from Transformers (TriSAT), tailored for multimodal sentiment analysis. TriSAT incorporates a trimodal transformer featuring a module called Trimodal Multi-Head Attention (TMHA). TMHA considers language as the primary modality, combines information from language, video, and audio in a single computation, and analyzes sentiment from a trimodal perspective. This approach significantly reduces computational complexity while delivering high performance. Moreover, we propose an Attraction-Repulsion (AR) loss and a Trimodal Supervised Contrastive (TSC) loss to further enhance sentiment analysis performance. We conduct experiments on three public datasets to evaluate TriSAT's performance, and the results consistently demonstrate its competitiveness with state-of-the-art approaches.
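The abstract describes TMHA only at a high level, so the following is a minimal, hypothetical sketch of what a single-computation trimodal attention could look like: language features form the queries, while the keys and values are drawn jointly from language, video, and audio, replacing the usual pairwise cross-modal attention blocks. All module and parameter names here (`TrimodalMultiHeadAttention`, `d_model`, `num_heads`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TrimodalMultiHeadAttention(nn.Module):
    """Sketch of a single-pass trimodal attention (hypothetical
    reconstruction; the paper's actual TMHA may differ).

    Language acts as the primary modality: its features form the
    queries, while the keys/values pool language, video, and audio
    together, so one attention call replaces pairwise cross-modal
    attention between every modality pair.
    """

    def __init__(self, d_model: int = 128, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, lang, video, audio):
        # lang/video/audio: (batch, seq_len_*, d_model); all three are
        # assumed to be projected to a shared dimension upstream.
        ctx = torch.cat([lang, video, audio], dim=1)  # joint key/value pool
        out, _ = self.attn(query=lang, key=ctx, value=ctx)
        return out  # fused representation aligned to the language sequence


if __name__ == "__main__":
    # Usage: three modality streams with different sequence lengths.
    B, d = 2, 128
    lang = torch.randn(B, 20, d)
    video = torch.randn(B, 50, d)
    audio = torch.randn(B, 40, d)
    fused = TrimodalMultiHeadAttention(d_model=d)(lang, video, audio)
    print(fused.shape)  # torch.Size([2, 20, 128])
```

Under these assumptions, one joint attention scales with the total concatenated sequence length, whereas pairwise cross-modal transformers need a separate attention computation for each ordered modality pair; that difference is consistent with the complexity reduction the abstract claims.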
About the Journal
The IEEE/ACM Transactions on Audio, Speech, and Language Processing covers audio, speech and language processing and the sciences that support them. In audio processing: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. In speech processing: areas such as speech analysis, synthesis, coding, speech and speaker recognition, speech production and perception, and speech enhancement. In language processing: speech and text analysis, understanding, generation, dialog management, translation, summarization, question answering and document indexing and retrieval, as well as general language modeling.