Hybrid-Based Facial Expression Recognition Approach for Human-Computer Interaction
Yacine Yaddaden, Mehdi Adda, A. Bouzouane, S. Gaboury, B. Bouchard
2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), August 2018. DOI: 10.1109/MMSP.2018.8547081
{"title":"基于混合的人机交互面部表情识别方法","authors":"Yacine Yaddaden, Mehdi Adda, A. Bouzouane, S. Gaboury, B. Bouchard","doi":"10.1109/MMSP.2018.8547081","DOIUrl":null,"url":null,"abstract":"Human-Computer Interaction represents an important component in each device designed to be used by humans. Moreover, improving interaction leads to a better user experience and effectiveness of the designed device. One of the most intuitive ways of interaction remains emotions since they allow to understand and even predict the human behavior and react to it. Nevertheless, emotion recognition still challenging since emotions might be complex and subtle. In this paper, we introduce a new hybrid-based approach to identify emotions through facial expressions. We combine two different feature types that are geometric-based (from facial fiducial points) and appearance-based (from Discrete Wavelet Transform coefficients). Each one provides specific information about the six basic emotions to identify. Furthermore, we propose to use a multi-class Support Vector Machine architecture for classification and Extremely Randomized Trees as feature selection technique. Carried experimentation attests to the effectiveness of our approach since it yields 96.11%, 91.79% and 99.05% with three benchmark facial expression datasets namely JAFFE, KDEF and RaFD.","PeriodicalId":137522,"journal":{"name":"2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Hybrid-Based Facial Expression Recognition Approach for Human-Computer Interaction\",\"authors\":\"Yacine Yaddaden, Mehdi Adda, A. Bouzouane, S. Gaboury, B. Bouchard\",\"doi\":\"10.1109/MMSP.2018.8547081\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human-Computer Interaction represents an important component in each device designed to be used by humans. Moreover, improving interaction leads to a better user experience and effectiveness of the designed device. One of the most intuitive ways of interaction remains emotions since they allow to understand and even predict the human behavior and react to it. Nevertheless, emotion recognition still challenging since emotions might be complex and subtle. In this paper, we introduce a new hybrid-based approach to identify emotions through facial expressions. We combine two different feature types that are geometric-based (from facial fiducial points) and appearance-based (from Discrete Wavelet Transform coefficients). Each one provides specific information about the six basic emotions to identify. Furthermore, we propose to use a multi-class Support Vector Machine architecture for classification and Extremely Randomized Trees as feature selection technique. 
Carried experimentation attests to the effectiveness of our approach since it yields 96.11%, 91.79% and 99.05% with three benchmark facial expression datasets namely JAFFE, KDEF and RaFD.\",\"PeriodicalId\":137522,\"journal\":{\"name\":\"2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MMSP.2018.8547081\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMSP.2018.8547081","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Human-Computer Interaction is an important component of every device designed to be used by humans, and improving interaction leads to a better user experience and a more effective device. One of the most intuitive channels of interaction is emotion, since emotions make it possible to understand, and even predict, human behavior and to react to it. Nevertheless, emotion recognition remains challenging because emotions can be complex and subtle. In this paper, we introduce a new hybrid approach to identifying emotions through facial expressions. We combine two different feature types: geometric features (from facial fiducial points) and appearance features (from Discrete Wavelet Transform coefficients). Each provides specific information about the six basic emotions to identify. Furthermore, we propose a multi-class Support Vector Machine architecture for classification and Extremely Randomized Trees as the feature selection technique. The experiments carried out attest to the effectiveness of our approach, which yields 96.11%, 91.79% and 99.05% accuracy on three benchmark facial expression datasets, namely JAFFE, KDEF and RaFD.
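As a rough illustration of the pipeline the abstract describes, and not the authors' implementation, the following Python sketch combines the two feature types and the ExtraTrees-plus-SVM stage using NumPy, PyWavelets and scikit-learn. The concrete encodings (pairwise fiducial-point distances, Haar approximation coefficients) and the SVM settings are assumptions made for the example, and scikit-learn's SVC handles the multi-class case with its default one-vs-one scheme rather than the specific multi-class architecture proposed in the paper.

# Minimal sketch of a hybrid geometric + appearance pipeline; feature details
# (landmark pairs, wavelet choice and level) are illustrative assumptions.
import numpy as np
import pywt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def geometric_features(landmarks):
    """Pairwise distances between facial fiducial points (N x 2 array)."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu]


def appearance_features(face_gray, level=2):
    """Approximation coefficients of a 2-D Discrete Wavelet Transform."""
    coeffs = pywt.wavedec2(face_gray, wavelet="haar", level=level)
    return coeffs[0].ravel()  # keep only the low-frequency sub-band


def hybrid_features(face_gray, landmarks):
    """Concatenate geometric and appearance features into one vector."""
    return np.concatenate([geometric_features(landmarks),
                           appearance_features(face_gray)])


# X: stacked hybrid feature vectors, y: labels for the six basic emotions.
# ExtraTrees ranks the features; the selected subset feeds a multi-class SVM.
model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(
        ExtraTreesClassifier(n_estimators=200, random_state=0))),
    ("svm", SVC(kernel="rbf", C=10.0)),
])
# model.fit(X_train, y_train); accuracy = model.score(X_test, y_test)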