{"title":"Weighted Feature Fusion Based Emotional Recognition for Variable-length Speech using DNN","authors":"Sifan Wu, Fei Li, Pengyuan Zhang","doi":"10.1109/IWCMC.2019.8766646","DOIUrl":null,"url":null,"abstract":"Emotion recognition plays an increasingly important role in human-computer interaction systems, which is a key technology in multimedia communication. Because neural networks can automatically learn the intermediate representation of raw speech signal, currently, most methods use Convolutional Neural Network (CNN) to extract information directly from spectrograms, but this may result in the ineffective use of information in hand-crafted features. In this work, a model based on weighted feature fusion method is proposed for emotion recognition of variable-length speech. Since the Chroma-based features are closely related to speech emotions, our model can effectively utilize the useful information in Chromaticity map to improve the performance by combining CNN-based features and Chroma-based features. We evaluated the model on the Interactive Emotional Motion Capture (IEMOCAP) dataset and achieved more than 5% increase in weighted accuracy (WA) and unweighted accuracy (UA), comparing with the existing state-of-the-art methods.","PeriodicalId":363800,"journal":{"name":"2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWCMC.2019.8766646","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Emotion recognition, a key technology in multimedia communication, plays an increasingly important role in human-computer interaction systems. Because neural networks can automatically learn intermediate representations of the raw speech signal, most current methods use a Convolutional Neural Network (CNN) to extract information directly from spectrograms, but this may leave the information in hand-crafted features underused. In this work, a model based on a weighted feature fusion method is proposed for emotion recognition of variable-length speech. Since chroma-based features are closely related to speech emotion, our model can effectively exploit the useful information in the chromaticity map to improve performance by combining CNN-based features with chroma-based features. We evaluated the model on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset and achieved improvements of more than 5% in weighted accuracy (WA) and unweighted accuracy (UA) compared with existing state-of-the-art methods.
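To make the fusion idea concrete, below is a minimal sketch of a weighted feature fusion model: a CNN branch embeds the spectrogram, a second branch embeds the 12-dimensional chroma features, and learnable softmax-normalized weights combine the two embeddings before classification. This is not the authors' exact architecture; all layer sizes, pooling choices, and names are illustrative assumptions, and global average pooling is used here as one simple way to handle variable-length input.

```python
# Hypothetical sketch of weighted feature fusion for speech emotion
# recognition; architecture details are assumptions, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusionSER(nn.Module):
    def __init__(self, n_mels=64, n_chroma=12, embed_dim=128, n_classes=4):
        super().__init__()
        # CNN branch over the (1, n_mels, time) spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling absorbs variable length
        )
        self.cnn_proj = nn.Linear(64, embed_dim)
        # Chroma branch: mean-pool the chroma sequence over time, then project.
        self.chroma_proj = nn.Linear(n_chroma, embed_dim)
        # Learnable fusion weights, normalized with softmax in forward().
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, spec, chroma):
        # spec: (batch, 1, n_mels, T); chroma: (batch, n_chroma, T)
        cnn_feat = self.cnn_proj(self.cnn(spec).flatten(1))
        chroma_feat = self.chroma_proj(chroma.mean(dim=-1))
        w = F.softmax(self.fusion_logits, dim=0)
        fused = w[0] * cnn_feat + w[1] * chroma_feat  # weighted fusion
        return self.classifier(fused)

model = WeightedFusionSER()
spec = torch.randn(2, 1, 64, 300)   # two utterances, padded/cropped to T=300
chroma = torch.randn(2, 12, 300)
logits = model(spec, chroma)        # (2, 4) emotion-class scores
```

In this sketch the fusion weights are trained jointly with the rest of the network, so the model can learn how much to trust each feature stream; the paper's reported gains suggest the chroma branch contributes complementary information rather than being redundant with the spectrogram CNN.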