Facial Features Extraction Based on Distance and Area of Points for Expression Recognition
M. Rusydi, Rizka Hadelina, O. W. Samuel, A. W. Setiawan, C. Machbub
2019 4th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), July 2019
DOI: 10.1109/ACIRS.2019.8936005
Citations: 2
Abstract
Facial expression is a means of non-verbal communication that provides information from which an individual’s emotional state can be decoded. Facial expression recognition has been applied in various fields and has become an increasingly active research area in recent years. A critically important aspect of facial expression recognition is the feature extraction process. Hence, this paper presents a new facial feature extraction method for expression detection. The proposed method computes the distances and areas formed by two or three facial points provided by Kinect v2, and these computations yield the facial features. The features that can potentially distinguish the happiness, disgust, surprise, and anger expressions are then selected. From the results of the extraction process, a total of 6 facial features were formed from the 12 points located around the mouth, eyebrows, and cheeks. The facial features were then applied as inputs to an artificial neural network model built for expression prediction. The overall result shows that the proposed method achieved a 75% success rate in correctly predicting the expressions of the participants.
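The distance and area features described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; the point coordinates below are hypothetical stand-ins for Kinect v2 facial landmarks, and the distance uses the standard Euclidean formula while the area of three points uses the shoelace formula:

```python
import math

def distance(p1, p2):
    """Euclidean distance between two 2-D facial points."""
    return math.dist(p1, p2)

def triangle_area(p1, p2, p3):
    """Area of the triangle formed by three facial points (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Hypothetical coordinates for three landmarks around the mouth.
mouth_left, mouth_right, lip_top = (0.0, 0.0), (4.0, 0.0), (2.0, 1.5)

mouth_width = distance(mouth_left, mouth_right)               # a two-point feature
mouth_area = triangle_area(mouth_left, mouth_right, lip_top)  # a three-point feature
print(mouth_width, mouth_area)
```

In the paper, six such scalar features (from 12 points around the mouth, eyebrows, and cheeks) form the input vector to the neural network classifier.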