Multimodal emotion recognition: integrating speech and text for improved valence, arousal, and dominance prediction

IF 2.2 | CAS Tier 4 (Computer Science) | JCR Q3 (Telecommunications)
Messaoudi Awatef, Boughrara Hayet, Lachiri Zied
{"title":"多模态情绪识别:整合语音和文本以改善效价、觉醒和优势预测","authors":"Messaoudi Awatef,&nbsp;Boughrara Hayet,&nbsp;Lachiri Zied","doi":"10.1007/s12243-025-01069-1","DOIUrl":null,"url":null,"abstract":"<div><p>While speech emotion recognition has traditionally focused on classifying emotions into discrete categories like happy or angry, recent research has shifted towards a dimensional approach using the Valence-Arousal-Dominance model. This model captures the continuous emotional state. However, research in speech emotion recognition (SER) consistently shows lower performance in predicting valence compared to arousal and dominance. To improve performance, we propose a system that combines acoustic and linguistic information. This work explores a novel multimodal approach for emotion recognition that combines speech and text data. This fusion strategy aims to outperform the traditional single-modality systems. Both early and late fusion techniques are investigated in this paper. Our findings show that combining modalities in a late fusion approach enhances system performance. In this late fusion architecture, the outputs from the acoustic deep learning network and the linguistic network are fed into two stacked dense neural network (NN) layers to predict valence, arousal, and dominance as continuous values. This approach leads to a significant improvement in overall emotion recognition performance compared to prior methods.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"80 and networking","pages":"401 - 415"},"PeriodicalIF":2.2000,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal emotion recognition: integrating speech and text for improved valence, arousal, and dominance prediction\",\"authors\":\"Messaoudi Awatef,&nbsp;Boughrara Hayet,&nbsp;Lachiri Zied\",\"doi\":\"10.1007/s12243-025-01069-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>While speech emotion recognition has traditionally focused on classifying emotions into discrete categories like happy or angry, recent research has shifted towards a dimensional approach using the Valence-Arousal-Dominance model. This model captures the continuous emotional state. However, research in speech emotion recognition (SER) consistently shows lower performance in predicting valence compared to arousal and dominance. To improve performance, we propose a system that combines acoustic and linguistic information. This work explores a novel multimodal approach for emotion recognition that combines speech and text data. This fusion strategy aims to outperform the traditional single-modality systems. Both early and late fusion techniques are investigated in this paper. Our findings show that combining modalities in a late fusion approach enhances system performance. In this late fusion architecture, the outputs from the acoustic deep learning network and the linguistic network are fed into two stacked dense neural network (NN) layers to predict valence, arousal, and dominance as continuous values. 
This approach leads to a significant improvement in overall emotion recognition performance compared to prior methods.</p></div>\",\"PeriodicalId\":50761,\"journal\":{\"name\":\"Annals of Telecommunications\",\"volume\":\"80 and networking\",\"pages\":\"401 - 415\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-02-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annals of Telecommunications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s12243-025-01069-1\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Telecommunications","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s12243-025-01069-1","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

While speech emotion recognition (SER) has traditionally focused on classifying emotions into discrete categories such as happy or angry, recent research has shifted towards a dimensional approach based on the Valence-Arousal-Dominance model, which captures emotional state as a continuous quantity. However, SER research consistently shows lower performance in predicting valence than in predicting arousal and dominance. To improve performance, we propose a system that combines acoustic and linguistic information. This work explores a novel multimodal approach to emotion recognition that fuses speech and text data, aiming to outperform traditional single-modality systems. Both early and late fusion techniques are investigated in this paper. Our findings show that combining modalities in a late fusion approach enhances system performance. In this late fusion architecture, the outputs of the acoustic deep-learning network and the linguistic network are fed into two stacked dense neural network (NN) layers that predict valence, arousal, and dominance as continuous values. This approach yields a significant improvement in overall emotion recognition performance compared to prior methods.
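To make the late-fusion head concrete, here is a minimal PyTorch sketch. The embedding sizes, hidden width, and all names are illustrative assumptions, not the authors' published configuration: the abstract only states that the two networks' output representations feed two stacked dense layers that regress valence, arousal, and dominance as continuous values.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Late-fusion regressor: concatenates the acoustic and linguistic
    embeddings produced by two separate upstream networks, then applies
    two stacked dense layers to predict valence, arousal, and dominance
    as continuous values.

    The embedding sizes and hidden width below are assumptions for
    illustration; the paper's abstract does not specify them.
    """

    def __init__(self, acoustic_dim: int = 256, linguistic_dim: int = 768,
                 hidden_dim: int = 128):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(acoustic_dim + linguistic_dim, hidden_dim),  # dense layer 1
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),  # dense layer 2 -> (valence, arousal, dominance)
        )

    def forward(self, acoustic_emb: torch.Tensor,
                linguistic_emb: torch.Tensor) -> torch.Tensor:
        # Late fusion: each modality is fully encoded by its own network
        # upstream; only the output representations are combined here.
        fused = torch.cat([acoustic_emb, linguistic_emb], dim=-1)
        return self.fusion(fused)

# Example: a batch of 4 utterances with the assumed embedding sizes.
head = LateFusionHead()
vad = head(torch.randn(4, 256), torch.randn(4, 768))
print(vad.shape)  # torch.Size([4, 3])
```

Early fusion, by contrast, would concatenate the modality features before a single shared network rather than combining the two networks' outputs at the end.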

Source journal: Annals of Telecommunications (Engineering/Technology - Telecommunications)
CiteScore: 5.20
Self-citation rate: 5.30%
Articles per year: 37
Average review time: 4.5 months
About the journal: Annals of Telecommunications is an international journal publishing original peer-reviewed papers in the field of telecommunications. It covers all the essential branches of modern telecommunications, ranging from digital communications to communication networks and the internet, to software, protocols and services, uses and economics. This large spectrum of topics accounts for the rapid convergence, through telecommunications, of the underlying technologies in computers, communications, and content management towards the emergence of the information and knowledge society. As a consequence, the journal provides a medium for exchanging research results and technological achievements accomplished by the European and international scientific community from academia and industry.