{"title":"Real-time speech emotion recognition using deep learning and data augmentation","authors":"Chawki Barhoumi, Yassine BenAyed","doi":"10.1007/s10462-024-11065-x","DOIUrl":null,"url":null,"abstract":"<div><p>In human–human interactions, detecting emotions is often easy as it can be perceived through facial expressions, body gestures, or speech. However, in human–machine interactions, detecting human emotion can be a challenge. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely through vocal intonation. In this work, we propose a SER system based on deep learning approaches and two efficient data augmentation techniques such as noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employe several algorithms such as Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square Value (RMS), and chroma to select the most appropriate vocal features that represent speech emotions. Three different deep learning models were imployed, including MultiLayer Perceptron (MLP), Convolutional Neural Network (CNN), and a hybrid model that combines CNN with Bidirectional Long-Short Term Memory (Bi-LSTM). By exploring these different approaches, we were able to identify the most effective model for accurately identifying emotional states from speech signals in real-time situation. Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically based on CNN+BiLSTM enhanced with data augmentation for the proposed real-time speech emotion recognition.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 2","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-11065-x.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-024-11065-x","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In human–human interactions, detecting emotions is often easy, as they can be perceived through facial expressions, body gestures, or speech. In human–machine interactions, however, detecting human emotion is more challenging. To improve this interaction, Speech Emotion Recognition (SER) has emerged, with the goal of recognizing emotions solely from vocal intonation. In this work, we propose an SER system based on deep learning approaches and two efficient data augmentation techniques: noise addition and spectrogram shifting. To evaluate the proposed system, we used three different datasets: TESS, EmoDB, and RAVDESS. We employed several acoustic features, including Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Mel spectrograms, Root Mean Square (RMS) value, and chroma, to select the most appropriate vocal features for representing speech emotions. Three different deep learning models were employed: a MultiLayer Perceptron (MLP), a Convolutional Neural Network (CNN), and a hybrid model that combines a CNN with a Bidirectional Long Short-Term Memory (Bi-LSTM) network. By exploring these different approaches, we identified the most effective model for accurately recognizing emotional states from speech signals in real-time situations. Overall, our work demonstrates the effectiveness of the proposed deep learning model, specifically the CNN+BiLSTM enhanced with data augmentation, for real-time speech emotion recognition.
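To make the described pipeline more concrete, the following is a minimal illustrative sketch, not the authors' code, of how the features named in the abstract (MFCC, ZCR, RMS, Mel spectrogram, chroma) can be extracted with librosa, together with noise-addition and spectrogram-shifting augmentation. All parameter values (sampling rate, number of MFCCs, noise factor, shift range) are assumptions chosen for illustration.

```python
# Illustrative sketch only: feature extraction and the two augmentation
# techniques mentioned in the abstract. Parameters are assumptions.
import numpy as np
import librosa

def extract_features(signal, sr=22050, n_mfcc=40):
    """Return a fixed-length feature vector (per-feature means over time frames)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    zcr = librosa.feature.zero_crossing_rate(y=signal)
    rms = librosa.feature.rms(y=signal)
    mel = librosa.feature.melspectrogram(y=signal, sr=sr)
    chroma = librosa.feature.chroma_stft(y=signal, sr=sr)
    return np.concatenate([
        mfcc.mean(axis=1),
        zcr.mean(axis=1),
        rms.mean(axis=1),
        mel.mean(axis=1),
        chroma.mean(axis=1),
    ])

def add_noise(signal, noise_factor=0.005):
    """Noise-addition augmentation: add scaled white noise to the waveform."""
    noise = np.random.randn(len(signal))
    return signal + noise_factor * noise

def shift_spectrogram(mel_spec, max_shift=10):
    """Spectrogram-shifting augmentation, assumed here to be a random roll
    of the Mel spectrogram along the time axis."""
    shift = np.random.randint(-max_shift, max_shift + 1)
    return np.roll(mel_spec, shift, axis=1)
```

Likewise, a hypothetical Keras sketch of a CNN+BiLSTM classifier of the kind the abstract describes is shown below; the layer sizes, kernel widths, dropout rate, input length, and eight-class output are assumptions, not the paper's reported configuration.

```python
# Minimal CNN + BiLSTM sketch (assumed architecture, not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bilstm(input_shape=(162, 1), num_classes=8):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # 1D convolutions learn local patterns over the feature sequence
        layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        # Bidirectional LSTM captures temporal context in both directions
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Such a model would be trained on (augmented) feature vectors like those produced above, with one-hot emotion labels; real-time use then amounts to extracting the same features from short audio buffers and calling the trained model on each buffer.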
About the journal
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.