Ana Carolina Nicolosi da Rocha Gracioso, C. C. Botero Suarez, Clecio Bachini, Francisco Javier Ramirez Fernandez
{"title":"Emotion recognition system using Open Web Platform","authors":"Ana Carolina Nicolosi da Rocha Gracioso, C. C. Botero Suarez, Clecio Bachini, Francisco Javier Ramirez Fernandez","doi":"10.1109/CCST.2013.6922065","DOIUrl":null,"url":null,"abstract":"This paper proposes a model for recognizing emotions through movement of facial muscles inspired by FACS (Facial Action Coding System) and FACSAID (Facial Action Coding System Affect Interpretation Dictionary). The computational implementation of the proposed model, here called WeBSER (Web-Based System for Emotion Recognition), was produced in Open Web Platform and is able to infer the user's emotional state in real time. The images of the user's face are captured using a webcam and emotions are classified using a Computer Vision system that uses the Web as a platform. Given the sequences of images acquired in real time via webcam, the WeBSER performs the following steps: Face detection and segmentation (eyes with eyebrows, nose and mouth); Entering reading points; Classification of emotions based on the movement of the reading points. For face detection and segmentation of face regions such as eyes, nose and mouth, the Viola-Jones method was used. Given the face image and the location of the segmented regions, 20 reading points were identified in image. The movement of each reading point is analyzed relatively to the other points. The direction of the movement of reading points is classified in bands of 45 degrees; Thus, each point can assume one of eight directions or remain stationary. Finally, the classification of emotions is made based on the movement of the reading points. This proposed model has a mean accuracy of 76,6% for determining exact emotions, and 84.4% to indicate uncomfortable states of persons suggesting suspicious behaviors.","PeriodicalId":243791,"journal":{"name":"2013 47th International Carnahan Conference on Security Technology (ICCST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 47th International Carnahan Conference on Security Technology (ICCST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCST.2013.6922065","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
This paper proposes a model for recognizing emotions through the movement of facial muscles, inspired by FACS (Facial Action Coding System) and FACSAID (Facial Action Coding System Affect Interpretation Dictionary). The computational implementation of the proposed model, called WeBSER (Web-Based System for Emotion Recognition), was built on the Open Web Platform and infers the user's emotional state in real time. Images of the user's face are captured with a webcam, and emotions are classified by a computer vision system that uses the Web as its platform. Given the sequence of images acquired in real time via webcam, WeBSER performs the following steps: face detection and segmentation (eyes with eyebrows, nose, and mouth); placement of reading points; and classification of emotions based on the movement of those reading points. The Viola-Jones method is used for face detection and for segmenting facial regions such as the eyes, nose, and mouth. Given the face image and the locations of the segmented regions, 20 reading points are identified in the image. The movement of each reading point is analyzed relative to the other points, and its direction is quantized into bands of 45 degrees, so each point can assume one of eight directions or remain stationary. Finally, emotions are classified from the movement of the reading points. The proposed model has a mean accuracy of 76.6% for identifying the exact emotion and 84.4% for indicating uncomfortable states that suggest suspicious behavior.
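The abstract does not include code, but the eight-direction quantization of reading-point movement lends itself to a short illustration. The TypeScript sketch below is written for this summary and is not the authors' implementation: the point type, the direction labels, and the `minDisplacementPx` noise threshold are assumptions; only the idea of mapping a point's displacement into one of eight 45-degree bands or a stationary state comes from the paper.

```typescript
// Minimal sketch of the 45-degree direction quantization described in the
// abstract: a reading point's displacement between two frames is mapped to
// one of eight direction bands, or marked stationary if it barely moved.

type Point = { x: number; y: number };

// Eight compass-style labels, one per 45-degree band (assumed naming).
const DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"] as const;
type Direction = (typeof DIRECTIONS)[number] | "STATIONARY";

function classifyMovement(
  prev: Point,
  curr: Point,
  minDisplacementPx = 2 // assumed noise threshold, not from the paper
): Direction {
  const dx = curr.x - prev.x;
  const dy = prev.y - curr.y; // image y grows downward; flip so "N" means up
  if (Math.hypot(dx, dy) < minDisplacementPx) return "STATIONARY";

  // Angle of the displacement in degrees, normalized to [0, 360).
  const angle = (Math.atan2(dy, dx) * 180) / Math.PI;
  const normalized = (angle + 360) % 360;

  // Each band is 45 degrees wide, centered on the eight directions.
  const band = Math.round(normalized / 45) % 8;
  return DIRECTIONS[band];
}

// Example: a point that moved up and to the right falls in the "NE" band.
console.log(classifyMovement({ x: 100, y: 200 }, { x: 104, y: 195 })); // "NE"
```

In the system described by the abstract, the per-point directions produced by a function like this would then feed a rule- or pattern-based mapping from combinations of point movements to emotion labels; that mapping is not specified in the abstract, so it is not sketched here.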