Prototype-oriented multimodal emotion contrast-enhancer
Qizhou Zhang, Qimeng Yang, Shengwei Tian, Long Yu, Xin Fan, Jinmiao Song
Computers & Electrical Engineering, Volume 124, Article 110393 (May 2025). DOI: 10.1016/j.compeleceng.2025.110393
Citations: 0
Abstract
Prototype learning has proven effective and reliable for few-shot learning, which suggests it can also serve as a form of data augmentation. At the same time, although contrastive learning (CL)-based methods can alleviate the data sparsity problem, they may amplify the noise present in the original features. Recently, a series of strong models has emerged in multimodal sentiment analysis; however, the limited size of benchmark datasets in this field poses significant challenges for training. To address this, we propose a prototype-contrast-enhanced approach for multimodal sentiment analysis. Our method combines contrastive learning with prototype learning: an improved contrastive loss supervises the quality of the learned prototypes, which in turn ensures the effectiveness of the data augmentation. In other words, prototype learning denoises the features used in contrastive learning, while contrastive learning supervises prototype quality. During the training phase, we generate prototype representations as base classes, and these representations are supervised by the contrastive loss. In the testing phase, the base classes augment samples, helping the model recognize emotions accurately. To evaluate the proposed method, we conduct experiments on the widely used multimodal sentiment datasets MOSI and MOSEI. Extensive experiments confirm the effectiveness of our approach. The code is publicly available at https://github.com/925151505/MyCode
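The abstract describes a three-part interplay: class prototypes are computed from training features, a contrastive loss supervises those prototypes during training, and at test time the prototypes act as base classes that augment (denoise) incoming samples. The sketch below, in PyTorch, illustrates one plausible reading of that pipeline; it is not the authors' released implementation, and all names and hyperparameters (feature_dim, tau, alpha) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of prototype learning
# supervised by a contrastive loss, with prototype-based test-time augmentation.
import torch
import torch.nn.functional as F

def compute_prototypes(features, labels, num_classes):
    """Mean feature vector per emotion class (the 'base classes')."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def prototype_contrastive_loss(features, labels, prototypes, tau=0.1):
    """Supervise prototype quality: pull each sample toward its own class
    prototype and push it away from the others (InfoNCE-style)."""
    features = F.normalize(features, dim=1)
    logits = features @ prototypes.t() / tau   # (batch, num_classes)
    return F.cross_entropy(logits, labels)

def augment_with_prototypes(features, prototypes, alpha=0.5):
    """Test-time augmentation: mix each sample with its nearest prototype
    to denoise the representation before emotion classification."""
    features = F.normalize(features, dim=1)
    nearest = prototypes[(features @ prototypes.t()).argmax(dim=1)]
    return F.normalize(alpha * features + (1 - alpha) * nearest, dim=1)

# Toy usage with random fused multimodal features.
feats = torch.randn(32, 128)
labels = torch.randint(0, 3, (32,))
protos = compute_prototypes(feats, labels, num_classes=3)
loss = prototype_contrastive_loss(feats, labels, protos)
test_feats = augment_with_prototypes(torch.randn(8, 128), protos)
```

In this reading, the contrastive loss and the prototypes supervise each other: if the prototypes drift toward noisy features, the loss rises, and conversely the cleaned prototypes give the contrastive objective less noisy anchors, which is the mutual-supervision idea the abstract emphasizes.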
Journal Introduction:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.