{"title":"Spatio-Temporal Context Modelling for Speech Emotion Classification","authors":"Md. Asif Jalal, Roger K. Moore, Thomas Hain","doi":"10.1109/ASRU46091.2019.9004037","DOIUrl":null,"url":null,"abstract":"Speech emotion recognition (SER) is a requisite for emotional intelligence that affects the understanding of speech. One of the most crucial tasks is to obtain patterns having a maximum correlation for the emotion classification task from the speech signal while being invariant to the changes in frequency, time and other external distortions. Therefore, learning emotional contextual feature representation independent of speaker and environment is essential. In this paper, a novel spatiotemporal context modelling framework for robust SER is proposed to learn feature representation by using acoustic context expansion with high dimensional feature projection. The framework uses a deep convolutional neural network (CNN) and self-attention network. The CNNs combine spatiotemporal features. The attention network produces high dimensional task-specific features and combines these features for context modelling, which altogether provides a state-of-the-art technique for classifying the extracted patterns for speech emotion. Speech emotion is a categorical perception representing discrete sensory events. 
The proposed approach is compared with a wide range of architectures on the RAVDESS and IEMOCAP corpora for 8-class and 4-class emotion classification tasks and remarkable gain over state-of-the-art systems are obtained, absolutely 15%, 10% respectively.","PeriodicalId":150913,"journal":{"name":"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","volume":"192 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASRU46091.2019.9004037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
Speech emotion recognition (SER) is a prerequisite for emotional intelligence and affects the understanding of speech. One of the most crucial tasks is to extract patterns from the speech signal that are maximally correlated with the emotion classification task while remaining invariant to changes in frequency, time, and other external distortions. Learning emotional contextual feature representations that are independent of speaker and environment is therefore essential. In this paper, a novel spatiotemporal context modelling framework for robust SER is proposed that learns feature representations using acoustic context expansion with high-dimensional feature projection. The framework combines a deep convolutional neural network (CNN) with a self-attention network. The CNN extracts spatiotemporal features, while the attention network produces high-dimensional task-specific features and combines them for context modelling, together providing a state-of-the-art technique for classifying the extracted patterns of speech emotion. Speech emotion is a categorical perception representing discrete sensory events. The proposed approach is compared with a wide range of architectures on the RAVDESS and IEMOCAP corpora for 8-class and 4-class emotion classification tasks, obtaining remarkable absolute gains of 15% and 10%, respectively, over state-of-the-art systems.
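The abstract does not specify the exact architecture, but the self-attention step it describes over CNN frame features can be sketched as single-head scaled dot-product attention. The following is a minimal illustration only, assuming hypothetical projection matrices and dimensions; it is not the paper's implementation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over frame features.

    X:           (T, d) matrix of T frame-level feature vectors
                 (e.g. the output of a CNN over a spectrogram).
    Wq, Wk, Wv:  (d, dk) projection matrices (hypothetical parameters).
    Returns the (T, dk) context-weighted feature sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])         # pairwise frame affinities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over frames
    return weights @ V                             # context-expanded features

# Illustrative shapes: 50 frames of 64-dim CNN features projected to 32 dims.
rng = np.random.default_rng(0)
T, d, dk = 50, 64, 32
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, dk)) * 0.1 for _ in range(3))
context = self_attention(X, Wq, Wk, Wv)
print(context.shape)  # (50, 32)
```

Each output frame is a weighted mixture of all input frames, which is one way to realise the "acoustic context expansion" the abstract refers to: emotional cues spread across an utterance can influence every frame's representation before classification.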