Real-time speech emotion recognition using deep learning and data augmentation

IF 10.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Chawki Barhoumi, Yassine BenAyed
{"title":"Real-time speech emotion recognition using deep learning and data augmentation","authors":"Chawki Barhoumi,&nbsp;Yassine BenAyed","doi":"10.1007/s10462-024-11065-x","DOIUrl":null,"url":null,"abstract":"<div><p>In human–human interactions, detecting emotions is often easy as it can be perceived through facial expressions, body gestures, or speech. However, in human–machine interactions, detecting human emotion can be a challenge. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely through vocal intonation. In this work, we propose a SER system based on deep learning approaches and two efficient data augmentation techniques such as noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employe several algorithms such as Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square Value (RMS), and chroma to select the most appropriate vocal features that represent speech emotions. Three different deep learning models were imployed, including MultiLayer Perceptron (MLP), Convolutional Neural Network (CNN), and a hybrid model that combines CNN with Bidirectional Long-Short Term Memory (Bi-LSTM). By exploring these different approaches, we were able to identify the most effective model for accurately identifying emotional states from speech signals in real-time situation. Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically based on CNN+BiLSTM enhanced with data augmentation for the proposed real-time speech emotion recognition.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 2","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-11065-x.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-024-11065-x","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

In human–human interactions, detecting emotions is often easy, as they can be perceived through facial expressions, body gestures, or speech. However, in human–machine interactions, detecting human emotion can be a challenge. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely through vocal intonation. In this work, we propose an SER system based on deep learning approaches and two efficient data augmentation techniques: noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employ several algorithms, namely Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square Value (RMS), and chroma, to select the most appropriate vocal features for representing speech emotions. Three different deep learning models were employed: a MultiLayer Perceptron (MLP), a Convolutional Neural Network (CNN), and a hybrid model that combines a CNN with a Bidirectional Long Short-Term Memory network (Bi-LSTM). By exploring these different approaches, we were able to identify the most effective model for accurately identifying emotional states from speech signals in real-time situations. Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically the CNN+BiLSTM enhanced with data augmentation, for real-time speech emotion recognition.
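The feature-extraction and augmentation steps named in the abstract can be sketched as follows. This is a minimal illustration assuming librosa and NumPy; the noise scale (noise_factor), the circular-shift reading of "spectrogram shifting", the mean pooling of features over time, and the file name speech.wav are assumptions for demonstration, not values or choices taken from the paper.

```python
# Hedged sketch of the feature pipeline: MFCC, ZCR, mel spectrogram,
# RMS, and chroma features, plus the two augmentations named in the abstract.
import numpy as np
import librosa

def add_noise(y, noise_factor=0.005):
    """Augmentation 1: mix white Gaussian noise into the waveform.
    noise_factor is an illustrative guess, not the paper's value."""
    noise = np.random.normal(0.0, 1.0, size=y.shape)
    return y + noise_factor * noise

def shift_spectrogram(S, shift=10):
    """Augmentation 2: shift a spectrogram along the time axis.
    A circular shift is one plausible reading of 'spectrogram shifting'."""
    return np.roll(S, shift, axis=1)

def extract_features(y, sr, n_mfcc=40):
    """Concatenate the per-frame features named in the abstract,
    averaged over time to get one fixed-length vector per utterance."""
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc), axis=1)
    zcr = np.mean(librosa.feature.zero_crossing_rate(y=y), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    rms = np.mean(librosa.feature.rms(y=y), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)
    return np.concatenate([mfcc, zcr, mel, rms, chroma])

y, sr = librosa.load("speech.wav", sr=None)        # hypothetical input file
features = extract_features(y, sr)                 # clean sample
features_aug = extract_features(add_noise(y), sr)  # noise-augmented sample
```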

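A hybrid CNN + Bi-LSTM of the kind the abstract describes could look like the Keras sketch below. The layer counts, filter sizes, input shape (n_frames frames of n_features coefficients), dropout rate, and the seven-class output are illustrative assumptions; the paper's exact architecture and hyperparameters are not reproduced here.

```python
# Hedged sketch of a CNN + Bi-LSTM hybrid classifier in Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

n_frames, n_features, n_classes = 200, 40, 7  # assumed shapes, e.g. 7 emotions

model = models.Sequential([
    layers.Input(shape=(n_frames, n_features)),
    # Convolutional front end learns local spectral-temporal patterns.
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # Bi-LSTM models longer-range temporal context in both directions.
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dropout(0.3),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```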
Source journal
Artificial Intelligence Review (Engineering and Technology - Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Annual publications: 194
Review time: 5.3 months
About the journal: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.