{"title":"Semi-Tensor Product based Multi-modal Fusion Method for Emotion Recognition","authors":"Fen Liu, Jianfeng Chen, Kemeng Li, Jisheng Bai","doi":"10.1109/ICSPCC55723.2022.9984246","DOIUrl":null,"url":null,"abstract":"Emotion recognition has been an important research topic in the field of human-computer interaction. Multi-modal emotion recognition makes full use of the complementarity of different modalities, which has greater advantages than single-modal emotion recognition. Traditional methods have low prediction performance and limited dimension of tensor fusion caused by inadequate multi-modal information fusion. In this paper, we proposed the semi-tensor product and attention based low-rank multi-modal fusion network (STALMF) for emotion recognition. We first use the semi-tensor product to effectively combine acoustic and language features. The self-attention module is then used for modeling the temporal dependencies of the fused features. Finally, the low-rank multi-modal fusion module is adopted to adequately fuse the information between the fused feature and the individual feature. We conducted our proposed method on the IEMOCAP dataset. The proposed method achieves an averaged F1 score of 82.4% and accuracy of 83.0%, outperforming the comparative methods. Experimental results show that the proposed method can effectively fuse multi-modal information by introducing the semi-tensor product, self-attention mechanism and low-rank multi-modal fusion module.","PeriodicalId":346917,"journal":{"name":"2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSPCC55723.2022.9984246","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Emotion recognition is an important research topic in human-computer interaction. Multi-modal emotion recognition exploits the complementarity of different modalities and therefore holds advantages over single-modal emotion recognition. Traditional methods suffer from low prediction performance and a limited tensor-fusion dimension caused by inadequate multi-modal information fusion. In this paper, we propose a semi-tensor product and attention based low-rank multi-modal fusion network (STALMF) for emotion recognition. We first use the semi-tensor product to effectively combine acoustic and language features. A self-attention module then models the temporal dependencies of the fused features. Finally, a low-rank multi-modal fusion module adequately fuses the information between the fused feature and the individual features. We evaluated the proposed method on the IEMOCAP dataset, where it achieves an average F1 score of 82.4% and an accuracy of 83.0%, outperforming the comparative methods. Experimental results show that the proposed method can effectively fuse multi-modal information by introducing the semi-tensor product, the self-attention mechanism, and the low-rank multi-modal fusion module.
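The semi-tensor product (STP) at the core of the fusion step is a standard generalization of matrix multiplication to operands with mismatched inner dimensions. Below is a minimal NumPy sketch of the left STP; the feature shapes are illustrative and not taken from the paper.

```python
import numpy as np
from math import lcm

def semi_tensor_product(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Left semi-tensor product A |x| B = (A (x) I_{t/n})(B (x) I_{t/p}),
    where n = cols(A), p = rows(B), and t = lcm(n, p).
    Reduces to ordinary matrix multiplication when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    A_lifted = np.kron(A, np.eye(t // n))   # shape: (m * t/n, t)
    B_lifted = np.kron(B, np.eye(t // p))   # shape: (t, q * t/p)
    return A_lifted @ B_lifted

# Toy example: combine a 4-dim "acoustic" vector with a 2-dim "language"
# vector directly, with no zero-padding or truncation.
acoustic = np.random.randn(1, 4)            # 1 x 4
language = np.random.randn(2, 1)            # 2 x 1
fused = semi_tensor_product(acoustic, language)
print(fused.shape)                          # (1, 2)
```

Because t = lcm(n, p), the two lifted factors always conform, which is what allows acoustic and language features of different dimensions to be combined without padding or truncation.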
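The abstract does not specify the low-rank multi-modal fusion module beyond its name; the sketch below follows the common low-rank fusion formulation (LMF, Liu et al., 2018), in which the full bilinear fusion tensor is replaced by r per-modality rank factors. The dimensions and the uniform rank weighting are assumptions for illustration only.

```python
import numpy as np

def low_rank_fusion(z_a: np.ndarray, z_l: np.ndarray,
                    W_a: np.ndarray, W_l: np.ndarray) -> np.ndarray:
    """Bimodal low-rank fusion: project each modality with r rank factors,
    take the elementwise product across modalities per rank, and sum over
    ranks. Shapes: W_a is (r, d_h, d_a + 1), W_l is (r, d_h, d_l + 1)."""
    z_a = np.append(z_a, 1.0)             # append 1 to retain unimodal terms
    z_l = np.append(z_l, 1.0)
    proj_a = W_a @ z_a                    # (r, d_h)
    proj_l = W_l @ z_l                    # (r, d_h)
    return (proj_a * proj_l).sum(axis=0)  # (d_h,)

# Hypothetical sizes: rank 4, fused dim 8, acoustic dim 16, language dim 32.
r, d_h, d_a, d_l = 4, 8, 16, 32
h = low_rank_fusion(np.random.randn(d_a), np.random.randn(d_l),
                    np.random.randn(r, d_h, d_a + 1),
                    np.random.randn(r, d_h, d_l + 1))
print(h.shape)                            # (8,)
```

The low-rank factorization avoids materializing the full outer-product tensor, which is the usual remedy for the limited fusion dimension that the abstract attributes to traditional tensor-fusion methods.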