Emotion Recognition by Integrating Eye Movement Analysis and Facial Expression Model
V. Huynh, Hyung-Jeong Yang, Gueesang Lee, Soohyung Kim, In Seop Na
Proceedings of the 3rd International Conference on Machine Learning and Soft Computing, 2019-01-25
DOI: 10.1145/3310986.3311001
Citations: 7
Abstract
This paper presents an emotion recognition method that combines knowledge from the face and from eye movements to improve system accuracy. The method recognizes emotion in three fundamental stages. First, a deep learning model produces the probability of a sample belonging to each emotion. Then, eye movement features are extracted with an open-source framework that implements algorithms with state-of-the-art results for this task; a new set of 51 features is used to obtain emotion-related information for the corresponding sample. Finally, the emotion of a sample is recognized by combining the knowledge from the two previous stages. Experiments on the validation set of the Acted Facial Expressions in the Wild (AFEW) dataset show that the eye movements improve the accuracy of the face model by 2.87%.
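The abstract describes a late-fusion pipeline (face-model probabilities combined with knowledge from eye-movement features) but does not specify the fusion rule. The sketch below is a minimal illustration, assuming a simple weighted average of per-emotion probabilities over the seven AFEW emotion classes; the weight, the class order, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Seven emotion classes used in AFEW (order assumed for illustration).
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def fuse_predictions(face_probs: np.ndarray,
                     eye_probs: np.ndarray,
                     face_weight: float = 0.7) -> str:
    """Combine per-emotion probabilities from the face model and an
    eye-movement-based classifier, then return the predicted label.

    face_probs, eye_probs: arrays of shape (7,), each summing to 1.
    face_weight: assumed mixing weight; the paper does not report one.
    """
    fused = face_weight * face_probs + (1.0 - face_weight) * eye_probs
    return EMOTIONS[int(np.argmax(fused))]

# Example usage with made-up probabilities for a single video clip.
face_probs = np.array([0.05, 0.02, 0.03, 0.60, 0.20, 0.05, 0.05])
eye_probs = np.array([0.10, 0.05, 0.05, 0.40, 0.25, 0.10, 0.05])
print(fuse_predictions(face_probs, eye_probs))  # -> "happy"
```

In such a scheme, the eye-movement branch only shifts the decision when the face model is uncertain, which is consistent with the modest but positive accuracy gain reported in the abstract.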