{"title":"使用深度学习和数据增强的实时语音情感识别","authors":"Chawki Barhoumi, Yassine BenAyed","doi":"10.1007/s10462-024-11065-x","DOIUrl":null,"url":null,"abstract":"<div><p>In human–human interactions, detecting emotions is often easy as it can be perceived through facial expressions, body gestures, or speech. However, in human–machine interactions, detecting human emotion can be a challenge. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely through vocal intonation. In this work, we propose a SER system based on deep learning approaches and two efficient data augmentation techniques such as noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employe several algorithms such as Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square Value (RMS), and chroma to select the most appropriate vocal features that represent speech emotions. Three different deep learning models were imployed, including MultiLayer Perceptron (MLP), Convolutional Neural Network (CNN), and a hybrid model that combines CNN with Bidirectional Long-Short Term Memory (Bi-LSTM). By exploring these different approaches, we were able to identify the most effective model for accurately identifying emotional states from speech signals in real-time situation. Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically based on CNN+BiLSTM enhanced with data augmentation for the proposed real-time speech emotion recognition.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 2","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-11065-x.pdf","citationCount":"0","resultStr":"{\"title\":\"Real-time speech emotion recognition using deep learning and data augmentation\",\"authors\":\"Chawki Barhoumi, Yassine BenAyed\",\"doi\":\"10.1007/s10462-024-11065-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In human–human interactions, detecting emotions is often easy as it can be perceived through facial expressions, body gestures, or speech. However, in human–machine interactions, detecting human emotion can be a challenge. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely through vocal intonation. In this work, we propose a SER system based on deep learning approaches and two efficient data augmentation techniques such as noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employe several algorithms such as Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square Value (RMS), and chroma to select the most appropriate vocal features that represent speech emotions. Three different deep learning models were imployed, including MultiLayer Perceptron (MLP), Convolutional Neural Network (CNN), and a hybrid model that combines CNN with Bidirectional Long-Short Term Memory (Bi-LSTM). By exploring these different approaches, we were able to identify the most effective model for accurately identifying emotional states from speech signals in real-time situation. 
Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically based on CNN+BiLSTM enhanced with data augmentation for the proposed real-time speech emotion recognition.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"58 2\",\"pages\":\"\"},\"PeriodicalIF\":10.7000,\"publicationDate\":\"2024-12-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10462-024-11065-x.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-024-11065-x\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-024-11065-x","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
In human–human interactions, detecting emotions is often easy, as they can be perceived through facial expressions, body gestures, or speech. In human–machine interactions, however, detecting human emotion can be a challenge. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely through vocal intonation. In this work, we propose an SER system based on deep learning approaches and two efficient data augmentation techniques: noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employed several feature-extraction algorithms, such as Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square value (RMS), and chroma, to select the most appropriate vocal features for representing speech emotions. Three different deep learning models were employed: a MultiLayer Perceptron (MLP), a Convolutional Neural Network (CNN), and a hybrid model that combines a CNN with a Bidirectional Long Short-Term Memory network (Bi-LSTM). By exploring these different approaches, we were able to identify the most effective model for accurately identifying emotional states from speech signals in real-time situations. Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically the CNN+BiLSTM enhanced with data augmentation, for real-time speech emotion recognition.
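As one concrete illustration of the feature-extraction step named in the abstract, the following is a minimal sketch assuming librosa as the audio library; the sampling rate, MFCC count, and the per-frame averaging into a fixed-length vector are illustrative assumptions, not the authors' actual settings.

```python
# Hypothetical sketch: extracting the acoustic features named in the abstract
# (MFCC, ZCR, Mel spectrogram, RMS, chroma) with librosa.
# Parameter values are illustrative assumptions.
import numpy as np
import librosa

def extract_features(path, sr=22050, n_mfcc=40):
    y, sr = librosa.load(path, sr=sr)
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T, axis=0)
    zcr = np.mean(librosa.feature.zero_crossing_rate(y).T, axis=0)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr).T, axis=0)
    rms = np.mean(librosa.feature.rms(y=y).T, axis=0)
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr).T, axis=0)
    # Concatenate the per-frame means into a single fixed-length feature vector.
    return np.concatenate([mfcc, zcr, mel, rms, chroma])
```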
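The two augmentation techniques mentioned (noise addition and spectrogram shifting) could look roughly like the sketch below; the noise scale, the shift range, and the choice to shift the raw waveform rather than the spectrogram itself are assumptions made for illustration only.

```python
# Hypothetical sketch of the two augmentation ideas named in the abstract:
# additive noise and a (time-)shift. The paper's exact formulations may differ.
import numpy as np

def add_noise(y, noise_factor=0.005):
    """Add white Gaussian noise scaled by the signal's peak amplitude."""
    noise = np.random.randn(len(y))
    return y + noise_factor * np.max(np.abs(y)) * noise

def shift_signal(y, shift_max=0.2, sr=22050):
    """Circularly shift the waveform by up to shift_max seconds."""
    shift = np.random.randint(-int(shift_max * sr), int(shift_max * sr))
    return np.roll(y, shift)
```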
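A plausible shape for the hybrid CNN+Bi-LSTM classifier, sketched with the Keras API; the layer count, filter sizes, and seven-class output are guesses for illustration, not the architecture reported in the paper.

```python
# Hypothetical sketch of a CNN + Bi-LSTM classifier of the kind described in the
# abstract; layer sizes and the 7-class output are illustrative assumptions.
from tensorflow.keras import layers, models

def build_cnn_bilstm(input_len, n_classes=7):
    inputs = layers.Input(shape=(input_len, 1))
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(128, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```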
Journal introduction:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.