{"title":"基于融合内模态和交叉模态特征的双峰语音情感识别","authors":"Samuel Kakuba, Dong Seog Han","doi":"10.1109/ICUFN57995.2023.10199790","DOIUrl":null,"url":null,"abstract":"The interactive speech between two or more inter locutors involves the text and acoustic modalities. These modalities consist of intra and cross-modality relationships at different time intervals which if modeled well, can avail emotionally rich cues for robust and accurate prediction of emotion states. This necessitates models that take into consideration long short-term dependency between the current, previous, and future time steps using multimodal approaches. Moreover, it is important to contextualize the interactive speech in order to accurately infer the emotional state. A combination of recurrent and/or convolutional neural networks with attention mechanisms is often used by researchers. In this paper, we propose a deep learning-based bimodal speech emotion recognition (DLBER) model that uses multi-level fusion to learn intra and cross-modality feature representations. The proposed DLBER model uses the transformer encoder to model the intra-modality features that are combined at the first level fusion in the local feature learning block (LFLB). We also use self-attentive bidirectional LSTM layers to further extract intramodality features before the second level fusion for further progressive learning of the cross-modality features. The resultant feature representation is fed into another self-attentive bidirectional LSTM layer in the global feature learning block (GFLB). The interactive emotional dyadic motion capture (IEMOCAP) dataset was used to evaluate the performance of the proposed DLBER model. The proposed DLBER model achieves 72.93% and 74.05% of F1 score and accuracy respectively.","PeriodicalId":341881,"journal":{"name":"2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Bimodal Speech Emotion Recognition using Fused Intra and Cross Modality Features\",\"authors\":\"Samuel Kakuba, Dong Seog Han\",\"doi\":\"10.1109/ICUFN57995.2023.10199790\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The interactive speech between two or more inter locutors involves the text and acoustic modalities. These modalities consist of intra and cross-modality relationships at different time intervals which if modeled well, can avail emotionally rich cues for robust and accurate prediction of emotion states. This necessitates models that take into consideration long short-term dependency between the current, previous, and future time steps using multimodal approaches. Moreover, it is important to contextualize the interactive speech in order to accurately infer the emotional state. A combination of recurrent and/or convolutional neural networks with attention mechanisms is often used by researchers. In this paper, we propose a deep learning-based bimodal speech emotion recognition (DLBER) model that uses multi-level fusion to learn intra and cross-modality feature representations. The proposed DLBER model uses the transformer encoder to model the intra-modality features that are combined at the first level fusion in the local feature learning block (LFLB). 
We also use self-attentive bidirectional LSTM layers to further extract intramodality features before the second level fusion for further progressive learning of the cross-modality features. The resultant feature representation is fed into another self-attentive bidirectional LSTM layer in the global feature learning block (GFLB). The interactive emotional dyadic motion capture (IEMOCAP) dataset was used to evaluate the performance of the proposed DLBER model. The proposed DLBER model achieves 72.93% and 74.05% of F1 score and accuracy respectively.\",\"PeriodicalId\":341881,\"journal\":{\"name\":\"2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN)\",\"volume\":\"66 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICUFN57995.2023.10199790\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICUFN57995.2023.10199790","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Interactive speech between two or more interlocutors involves both text and acoustic modalities. These modalities exhibit intra- and cross-modality relationships at different time intervals that, if modeled well, provide emotionally rich cues for robust and accurate prediction of emotional states. This calls for multimodal models that capture long- and short-term dependencies between current, previous, and future time steps. Moreover, the interactive speech must be contextualized to accurately infer the emotional state. Researchers often combine recurrent and/or convolutional neural networks with attention mechanisms. In this paper, we propose a deep learning-based bimodal speech emotion recognition (DLBER) model that uses multi-level fusion to learn intra- and cross-modality feature representations. The proposed DLBER model uses transformer encoders to model the intra-modality features, which are combined at the first-level fusion in the local feature learning block (LFLB). Self-attentive bidirectional LSTM layers further extract intra-modality features before the second-level fusion for progressive learning of cross-modality features. The resulting feature representation is fed into another self-attentive bidirectional LSTM layer in the global feature learning block (GFLB). The interactive emotional dyadic motion capture (IEMOCAP) dataset was used to evaluate the proposed DLBER model, which achieves an F1 score of 72.93% and an accuracy of 74.05%.
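The abstract describes the architecture only in words. The PyTorch sketch below is one possible reading of that description, not the authors' implementation: the hidden sizes, layer and head counts, the concatenation-based fusion operators, the mean-pooling step, the 4-class output, and the `DLBERSketch` / `SelfAttentiveBiLSTM` names are all assumptions introduced for illustration.

```python
# Minimal sketch of the two-level fusion idea in the abstract (assumptions noted inline).
import torch
import torch.nn as nn


class SelfAttentiveBiLSTM(nn.Module):
    """Bidirectional LSTM followed by self-attention over its output sequence."""

    def __init__(self, input_dim: int, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)              # (batch, time, 2 * hidden_dim)
        out, _ = self.attn(out, out, out)  # self-attention across time steps
        return out


class DLBERSketch(nn.Module):
    """Illustrative two-stage fusion model: per-modality transformer encoders with a
    first-level fusion (LFLB), self-attentive BiLSTMs with a second-level fusion,
    and a global self-attentive BiLSTM (GFLB) before classification."""

    def __init__(self, text_dim=768, audio_dim=128, d_model=256, num_classes=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.audio_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.text_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        self.audio_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        self.global_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        self.classifier = nn.Linear(d_model, num_classes)  # 4 classes assumed for IEMOCAP

    def forward(self, text_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # Intra-modality encoding with transformer encoders (LFLB)
        t = self.text_encoder(self.text_proj(text_feats))     # (B, T_text, d_model)
        a = self.audio_encoder(self.audio_proj(audio_feats))  # (B, T_audio, d_model)
        # First-level fusion: concatenate the modality sequences along time (assumption)
        first = torch.cat([t, a], dim=1)
        # Further intra-modality feature extraction with self-attentive BiLSTMs
        t = self.text_lstm(t)
        a = self.audio_lstm(a)
        # Second-level fusion: combine first-level features with the refined ones (assumption)
        second = torch.cat([first, t, a], dim=1)
        # Global feature learning block (GFLB), then temporal average pooling and class logits
        g = self.global_lstm(second)
        return self.classifier(g.mean(dim=1))


# Example with dummy pre-extracted features (feature dimensions are assumptions):
text = torch.randn(8, 50, 768)       # token-level text embeddings
audio = torch.randn(8, 120, 128)     # frame-level acoustic features
logits = DLBERSketch()(text, audio)  # shape (8, 4)
```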