Bimodal feature-based fusion for real-time emotion recognition in a mobile context
S. Gievska, Kiril Koroveshovski, Natasha Tagasovska
2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 401-407
Published: 2015-09-21 | DOI: 10.1109/ACII.2015.7344602
Citations: 8
Abstract
This research explores the viability of a bimodal fusion of linguistic and acoustic cues in speech to support real-time emotion recognition in a mobile application that steers the interaction dialogue in tune with the user's emotions. To capture affect at the language level, we utilize both machine learning and valence assessment of words carrying emotional connotations. The indicative value of acoustic cues in speech is of special concern in this research, and an optimized feature set is proposed. We highlight the results of independent evaluations of the underlying linguistic and acoustic processing components. We present a study and ensuing discussion of the performance metrics of a Logistic Model Tree that outperformed the other classifiers considered for the fusion process. The results reinforce the notion that capturing the sound interplay between a diverse set of features is crucial for confronting the subtleties of affect in speech that so often elude text-only or acoustic-only approaches to emotion recognition.
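To make the fusion step concrete, below is a minimal sketch of feature-level (early) fusion under stated assumptions: synthetic matrices stand in for the linguistic features (e.g., valence scores and machine-learned text cues) and the optimized acoustic descriptors, and since the Logistic Model Tree used in the paper is a Weka classifier, a plain logistic regression is substituted purely for illustration. All dimensions and names here are hypothetical, not the paper's actual feature set.

```python
# Sketch: early fusion of linguistic and acoustic feature vectors per utterance.
# NOTE: synthetic data and a stand-in classifier; the paper's Logistic Model
# Tree (Weka) is replaced by logistic regression for this illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dimensions: a small text-derived vector and a larger set of
# acoustic descriptors (pitch, energy, MFCC statistics, ...).
n_utterances, n_linguistic, n_acoustic = 200, 10, 40
X_linguistic = rng.normal(size=(n_utterances, n_linguistic))
X_acoustic = rng.normal(size=(n_utterances, n_acoustic))
y = rng.integers(0, 4, size=n_utterances)  # e.g., 4 emotion classes

# Early fusion: concatenate both modalities into one feature vector per utterance.
X_fused = np.hstack([X_linguistic, X_acoustic])

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X_fused, y, cv=5)
print(f"5-fold CV accuracy on synthetic data: {scores.mean():.2f}")
```

The same pipeline structure would apply with real features extracted from transcripts and audio; only the feature-extraction front end and the final classifier choice would change.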