Facial Emotion Recognition Based on Facial Motion Stream Generated by Kinect
N. Chanthaphan, K. Uchimura, T. Satonaka, Tsuyoshi Makioka
2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)
Published: 2015-11-23 · DOI: 10.1109/SITIS.2015.31
Citations: 19
Abstract
Facial emotion recognition is now used in a wide range of applications that are directly involved in human life. Because humans are fragile, the performance of these applications must be improved. In this paper, we describe a novel approach to extracting facial features from moving pictures. We introduce the facial movement stream, which is derived from the distance measured between each pair of coordinates on the human facial wireframe as it flows through each frame of the movement. We propose Facial Emotion Recognition Based on Facial Motion Stream generated by Kinect, employing two kinds of facial features. The first is simply the distance value of each pairwise combination of coordinates, packed into a 153-dimensional feature vector per frame. The second is derived from the first using the Structured Streaming Skeleton (SSS) approach, yielding a 765-dimensional feature vector per frame. Since no existing dataset suited our approach, we also present a method for constructing the dataset ourselves. The facial movements of five people were collected in the experiment. The results show that the average accuracy of the SSS feature outperformed the simple distance feature by 10% with K-Nearest Neighbors and by 26% with a Support Vector Machine.
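The per-frame feature described above is a vector of pairwise Euclidean distances between tracked facial coordinates. The abstract does not state how many points are tracked, but a 153-dimensional vector matches 18 points, since 18 · 17 / 2 = 153 unordered pairs. A minimal sketch of that distance computation, assuming 18 (x, y, z) landmarks per frame (the landmark values below are purely illustrative):

```python
from itertools import combinations
import math

def pairwise_distance_vector(landmarks):
    """Compute Euclidean distances between all unordered pairs of
    facial landmark coordinates for a single frame.

    landmarks: list of (x, y, z) tuples. With 18 tracked points this
    yields 18 * 17 / 2 = 153 distances, matching the per-frame
    feature dimensionality reported in the paper.
    """
    return [
        math.dist(a, b)  # Euclidean distance between the two points
        for a, b in combinations(landmarks, 2)
    ]

# Hypothetical frame with 18 landmark coordinates (values are made up).
frame = [(float(i), float(i % 5), 0.0) for i in range(18)]
features = pairwise_distance_vector(frame)
print(len(features))  # 153
```

Stacking one such vector per frame produces the facial movement stream; the SSS-based feature then expands each 153-dimensional frame vector to 765 dimensions (a factor of 5), presumably by incorporating temporal statistics across the stream, though the abstract does not detail the expansion.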