{"title":"结合经验模态分解和深度神经网络的语音信号情感识别","authors":"Shing Tai Pan, Ching Fa Chen, Chuan-Cheng Hong","doi":"10.54646/bijiam.2023.11","DOIUrl":null,"url":null,"abstract":"This paper proposes a novel method for speech emotion recognition. Empirical mode decomposition (EMD) isapplied in this paper for the extraction of emotional features from speeches, and a deep neural network (DNN)is used to classify speech emotions. This paper enhances the emotional components in speech signals by usingEMD with acoustic feature Mel-Scale Frequency Cepstral Coefficients (MFCCs) to improve the recognition ratesof emotions from speeches using the classifier DNN. In this paper, EMD is first used to decompose the speechsignals, which contain emotional components into multiple intrinsic mode functions (IMFs), and then emotionalfeatures are derived from the IMFs and are calculated using MFCC. Then, the emotional features are used to trainthe DNN model. Finally, a trained model that could recognize the emotional signals is then used to identify emotionsin speeches. Experimental results reveal that the proposed method is effective.","PeriodicalId":231453,"journal":{"name":"BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Emotion recognition based on speech signals by combiningempirical mode decomposition and deep neural network\",\"authors\":\"Shing Tai Pan, Ching Fa Chen, Chuan-Cheng Hong\",\"doi\":\"10.54646/bijiam.2023.11\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a novel method for speech emotion recognition. Empirical mode decomposition (EMD) isapplied in this paper for the extraction of emotional features from speeches, and a deep neural network (DNN)is used to classify speech emotions. This paper enhances the emotional components in speech signals by usingEMD with acoustic feature Mel-Scale Frequency Cepstral Coefficients (MFCCs) to improve the recognition ratesof emotions from speeches using the classifier DNN. In this paper, EMD is first used to decompose the speechsignals, which contain emotional components into multiple intrinsic mode functions (IMFs), and then emotionalfeatures are derived from the IMFs and are calculated using MFCC. Then, the emotional features are used to trainthe DNN model. Finally, a trained model that could recognize the emotional signals is then used to identify emotionsin speeches. 
Experimental results reveal that the proposed method is effective.\",\"PeriodicalId\":231453,\"journal\":{\"name\":\"BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54646/bijiam.2023.11\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54646/bijiam.2023.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper proposes a novel method for speech emotion recognition. Empirical mode decomposition (EMD) is applied to extract emotional features from speech, and a deep neural network (DNN) is used to classify speech emotions. The emotional components of the speech signal are enhanced by combining EMD with the acoustic feature Mel-scale frequency cepstral coefficients (MFCCs), which improves the emotion recognition rates achieved by the DNN classifier. EMD is first used to decompose the speech signals, which contain emotional components, into multiple intrinsic mode functions (IMFs); emotional features are then derived from the IMFs by computing MFCCs. These features are used to train the DNN model, and the trained model is finally applied to recognize emotions in speech. Experimental results show that the proposed method is effective.
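To make the EMD-MFCC-DNN pipeline described above concrete, the following is a minimal Python sketch. It is not the authors' implementation: the number of IMFs retained, the mean/standard-deviation pooling of frame-level MFCCs, the network architecture, and the helper names (emd_mfcc_features, build_dnn) are illustrative assumptions, since the abstract does not specify these details. The sketch assumes the PyEMD (EMD-signal), librosa, and TensorFlow/Keras packages.

```python
import numpy as np
import librosa                     # audio loading and MFCC extraction
from PyEMD import EMD              # empirical mode decomposition (EMD-signal package)
from tensorflow import keras
from tensorflow.keras import layers


def emd_mfcc_features(path, sr=16000, n_mfcc=13, n_imfs=3):
    """Decompose one speech file with EMD, compute MFCCs per IMF,
    and pool them into a single fixed-length feature vector.
    (IMF count and pooling are assumptions, not the paper's exact setup.)"""
    signal, sr = librosa.load(path, sr=sr)
    imfs = EMD().emd(signal)                   # shape: (num_imfs, num_samples)
    feats = []
    for imf in imfs[:n_imfs]:                  # keep only the first n_imfs components
        mfcc = librosa.feature.mfcc(y=imf, sr=sr, n_mfcc=n_mfcc)
        # pool frame-level MFCCs into per-IMF mean and std statistics
        feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    # zero-pad if the recording yields fewer than n_imfs IMFs
    while len(feats) < n_imfs:
        feats.append(np.zeros(2 * n_mfcc))
    return np.concatenate(feats)


def build_dnn(input_dim, num_classes):
    """A small fully connected classifier standing in for the paper's DNN;
    layer sizes and dropout rate are placeholder choices."""
    model = keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Example usage (wav_paths and labels are hypothetical placeholders):
# X = np.stack([emd_mfcc_features(p) for p in wav_paths])
# y = np.array(labels)                         # integer emotion labels
# model = build_dnn(X.shape[1], num_classes=len(set(labels)))
# model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)
```

The design choice worth noting is that MFCCs are computed on the IMFs rather than on the raw waveform, which is the core idea of the paper: EMD isolates oscillatory components in which the emotional content is more prominent before the standard cepstral features are extracted and fed to the classifier.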