{"title":"Audio-Visual Emotion Recognition System Using Multi-Modal Features","authors":"Anand Handa, Rashi Agarwal, Narendra Kohli","doi":"10.4018/IJCINI.20211001.OA34","DOIUrl":null,"url":null,"abstract":"Due to the highly variant face geometry and appearances, facial expression recognition (FER) is still a challenging problem. CNN can characterize 2D signals. Therefore, for emotion recognition in a video, the authors propose a feature selection model in AlexNet architecture to extract and filter facial features automatically. Similarly, for emotion recognition in audio, the authors use a deep LSTM-RNN. Finally, they propose a probabilistic model for the fusion of audio and visual models using facial features and speech of a subject. The model combines all the extracted features and use them to train the linear SVM (support vector machine) classifiers. The proposed model outperforms the other existing models and achieves state-of-the-art performance for audio, visual, and fusion models. The model classifies the seven known facial expressions, namely anger, happy, surprise, fear, disgust, sad, and neutral, on the eNTERFACE’05 dataset with an overall accuracy of 76.61%.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/IJCINI.20211001.OA34","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Facial expression recognition (FER) remains a challenging problem due to the high variability of face geometry and appearance. Because CNNs are well suited to characterizing 2D signals, the authors propose, for emotion recognition in video, a feature selection model built on the AlexNet architecture that automatically extracts and filters facial features. Similarly, for emotion recognition in audio, they use a deep LSTM-RNN. Finally, they propose a probabilistic model for fusing the audio and visual models, based on a subject's facial features and speech. The model combines all the extracted features and uses them to train linear SVM (support vector machine) classifiers. The proposed model outperforms existing models and achieves state-of-the-art performance for the audio, visual, and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness, and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
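To make the fusion-then-classify step concrete, below is a minimal sketch, in Python with scikit-learn, of feature-level fusion followed by a linear SVM, as the abstract describes. The actual feature extractors (AlexNet for video frames, the deep LSTM-RNN for audio) and the eNTERFACE'05 loading code are not given in the abstract, so the feature dimensions, the function `fuse_features`, and the random placeholder data here are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of the fusion stage: concatenate per-clip visual (CNN) and audio
# (LSTM) feature vectors, then train a linear SVM over the fused features.
# All data below is randomly generated placeholder content.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# The seven expression classes named in the abstract.
EMOTIONS = ["anger", "happiness", "surprise", "fear",
            "disgust", "sadness", "neutral"]

def fuse_features(visual: np.ndarray, audio: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate the two modalities per clip."""
    return np.concatenate([visual, audio], axis=1)

# Hypothetical pre-extracted features, one row per video clip.
rng = np.random.default_rng(0)
visual = rng.normal(size=(120, 4096))  # e.g. AlexNet fc7-sized activations
audio = rng.normal(size=(120, 256))    # e.g. final LSTM hidden state
labels = rng.integers(0, len(EMOTIONS), size=120)

X = fuse_features(visual, audio)
clf = make_pipeline(StandardScaler(), LinearSVC())  # linear SVM classifier
clf.fit(X[:100], labels[:100])
print("held-out accuracy:", clf.score(X[100:], labels[100:]))
```

With real features in place of the random arrays, the same pipeline trains and evaluates the fused classifier; the paper's probabilistic fusion of the separate audio and visual models would sit on top of, or in place of, this simple concatenation.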