{"title":"Bimodal approach in emotion recognition using speech and facial expressions","authors":"S. Emerich, E. Lupu, A. Apatean","doi":"10.1109/ISSCS.2009.5206101","DOIUrl":null,"url":null,"abstract":"This paper aims to present a multimodal approach in emotion recognition which integrates information from both facial expressions and speech signal. Using two acted databases on different subjects, we were able to emphasize six emotions: sadness, anger, happiness, disgust, fear and neutral state. The models in the system were designed and tested by using a Support Vector Machine classifier. Firstly, the analysis of the strengths and the limitations of the systems based only on facial expressions or speech signal was performed. Data was then fused at the feature level. The results show that in this case the performance and the robustness of the emotion recognition system have been improved.","PeriodicalId":277587,"journal":{"name":"2009 International Symposium on Signals, Circuits and Systems","volume":"397 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 International Symposium on Signals, Circuits and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSCS.2009.5206101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 17
Abstract
This paper presents a multimodal approach to emotion recognition that integrates information from both facial expressions and the speech signal. Using two acted databases recorded from different subjects, we distinguish six emotional states: sadness, anger, happiness, disgust, fear, and neutral. The models in the system were designed and tested using a Support Vector Machine classifier. First, the strengths and limitations of systems based only on facial expressions or only on the speech signal were analyzed. The data were then fused at the feature level. The results show that feature-level fusion improves both the performance and the robustness of the emotion recognition system.
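The pipeline the abstract describes, concatenating per-sample feature vectors from the two modalities and classifying the fused representation with an SVM, can be illustrated with a minimal sketch. The feature names, dimensions, synthetic data, and the use of scikit-learn below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of feature-level fusion for bimodal emotion recognition,
# assuming pre-extracted speech and facial feature vectors (all names,
# shapes, and data here are hypothetical placeholders).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 120                                    # placeholder dataset size
speech_feats = rng.normal(size=(n_samples, 40))    # e.g. prosodic/spectral features
face_feats = rng.normal(size=(n_samples, 60))      # e.g. facial appearance features
labels = rng.integers(0, 6, size=n_samples)        # six classes: sadness, anger,
                                                   # happiness, disgust, fear, neutral

# Feature-level fusion: concatenate each sample's speech and facial
# feature vectors into a single joint representation.
fused = np.hstack([speech_feats, face_feats])

# Support Vector Machine classifier trained on the fused features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, fused, labels, cv=5)
print("mean CV accuracy:", scores.mean())
```

In this scheme, fusion happens before classification, so a single SVM sees the joint feature space; this is what distinguishes feature-level fusion from decision-level fusion, where separate per-modality classifiers are combined afterwards.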