{"title":"基于注意机制的深度卷积递归神经网络鲁棒语音情感识别","authors":"Che-Wei Huang, Shrikanth S. Narayanan","doi":"10.1109/ICME.2017.8019296","DOIUrl":null,"url":null,"abstract":"We present a deep convolutional recurrent neural network for speech emotion recognition based on the log-Mel filterbank energies, where the convolutional layers are responsible for the discriminative feature learning. Based on the hypothesis that a better understanding of the internal configuration within an utterance would help reduce misclassification, we further propose a convolutional attention mechanism to learn the utterance structure relevant to the task. In addition, we quantitatively measure the performance gain contributed by each module in our model in order to characterize the nature of emotion expressed in speech. The experimental results on the eNTERFACE'05 emotion database validate our hypothesis and also demonstrate an absolute improvement by 4.62% compared to the state-of-the-art approach.","PeriodicalId":330977,"journal":{"name":"2017 IEEE International Conference on Multimedia and Expo (ICME)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"101","resultStr":"{\"title\":\"Deep convolutional recurrent neural network with attention mechanism for robust speech emotion recognition\",\"authors\":\"Che-Wei Huang, Shrikanth S. Narayanan\",\"doi\":\"10.1109/ICME.2017.8019296\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a deep convolutional recurrent neural network for speech emotion recognition based on the log-Mel filterbank energies, where the convolutional layers are responsible for the discriminative feature learning. Based on the hypothesis that a better understanding of the internal configuration within an utterance would help reduce misclassification, we further propose a convolutional attention mechanism to learn the utterance structure relevant to the task. In addition, we quantitatively measure the performance gain contributed by each module in our model in order to characterize the nature of emotion expressed in speech. 
The experimental results on the eNTERFACE'05 emotion database validate our hypothesis and also demonstrate an absolute improvement by 4.62% compared to the state-of-the-art approach.\",\"PeriodicalId\":330977,\"journal\":{\"name\":\"2017 IEEE International Conference on Multimedia and Expo (ICME)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"101\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Conference on Multimedia and Expo (ICME)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICME.2017.8019296\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Multimedia and Expo (ICME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME.2017.8019296","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: We present a deep convolutional recurrent neural network for speech emotion recognition based on log-Mel filterbank energies, where the convolutional layers are responsible for discriminative feature learning. Based on the hypothesis that a better understanding of the internal structure of an utterance helps reduce misclassification, we further propose a convolutional attention mechanism to learn the utterance structure relevant to the task. In addition, we quantitatively measure the performance gain contributed by each module in our model in order to characterize the nature of emotion expressed in speech. Experimental results on the eNTERFACE'05 emotion database validate our hypothesis and demonstrate an absolute improvement of 4.62% over the state-of-the-art approach.
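The abstract describes the model only at a high level, so the PyTorch sketch below is an illustrative reconstruction rather than the authors' implementation: a convolutional front end over log-Mel filterbank energies, a recurrent layer over the resulting frame-wise features, and an attention-weighted pooling step that summarizes the utterance before classification. All layer sizes, the number of emotion classes, and the simple frame-level attention (standing in for the paper's convolutional attention mechanism) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvRecurrentAttentionSER(nn.Module):
    """Convolutional recurrent network with frame-level attention for
    utterance-level emotion classification from log-Mel filterbank energies.
    Layer sizes are illustrative, not taken from the paper."""

    def __init__(self, n_mels=40, n_classes=6, rnn_hidden=128):
        super().__init__()
        # Convolutional front end: learns local time-frequency features.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),          # halve both time and frequency
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),          # halve frequency only
        )
        conv_feat_dim = 64 * (n_mels // 4)  # channels x remaining mel bins
        # Recurrent layer: models the temporal structure of the utterance.
        self.rnn = nn.GRU(conv_feat_dim, rnn_hidden,
                          batch_first=True, bidirectional=True)
        # Attention: scores each frame so salient segments dominate the
        # utterance-level summary (a simplified stand-in for the paper's
        # convolutional attention).
        self.attn = nn.Linear(2 * rnn_hidden, 1)
        self.classifier = nn.Linear(2 * rnn_hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, n_mels) log-Mel filterbank energies
        x = x.unsqueeze(1)                  # -> (batch, 1, time, mels)
        x = self.conv(x)                    # -> (batch, 64, time', mels')
        b, c, t, m = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * m)  # frame-wise features
        h, _ = self.rnn(x)                  # -> (batch, time', 2*hidden)
        scores = self.attn(h).squeeze(-1)   # -> (batch, time')
        weights = F.softmax(scores, dim=1)  # attention weights over frames
        utterance = (weights.unsqueeze(-1) * h).sum(dim=1)  # weighted pooling
        return self.classifier(utterance)   # emotion logits


if __name__ == "__main__":
    model = ConvRecurrentAttentionSER()
    dummy = torch.randn(2, 300, 40)  # 2 utterances, 300 frames, 40 mel bins
    print(model(dummy).shape)        # torch.Size([2, 6])
```

Attention-weighted pooling replaces plain mean pooling over time, which matches the paper's motivation that emotion is carried unevenly across an utterance; which segments matter is learned jointly with the classifier.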