Effective geometric features for human emotion recognition
Anwar Saeed, A. Al-Hamadi, R. Niese, Moftah Elzobi
2012 IEEE 11th International Conference on Signal Processing, October 2012
DOI: 10.1109/ICOSP.2012.6491565
Citations: 18
Abstract
The human face carries a variety of useful information; for example, a person's emotion, behavior, and pain can be perceived from facial expressions. In this paper, we make full use of eight fiducial facial points to extract geometric features, which are then used to infer the universal human emotions (happiness, surprise, anger, disgust, fear, and sadness). We compared our results with those obtained by two state-of-the-art algorithms on two separate databases. We show that, using features from only eight facial points, our approach performs as well as an algorithm that utilizes features extracted from 68 fiducial facial points, and as well as another algorithm that uses hundreds of texture features.
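The abstract does not spell out the feature definitions, but a common way to derive geometric features from a small set of facial landmarks is to compute normalized pairwise distances between them. The sketch below illustrates that idea; the landmark coordinates, point choices, and normalization are hypothetical assumptions, not the paper's actual feature set.

```python
import itertools
import math

# Hypothetical (x, y) coordinates for eight fiducial facial points,
# e.g. eye corners, eyebrow tips, and mouth corners. These values are
# illustrative only; the paper defines its own point set.
points = [
    (30, 40), (45, 40),   # left eye corners
    (55, 40), (70, 40),   # right eye corners
    (35, 30), (65, 30),   # eyebrow tips
    (40, 70), (60, 70),   # mouth corners
]

def geometric_features(pts):
    """Pairwise Euclidean distances between landmarks, normalized by the
    inner inter-eye distance so the features are scale-invariant."""
    reference = math.dist(pts[1], pts[2])  # distance between inner eye corners
    return [math.dist(p, q) / reference
            for p, q in itertools.combinations(pts, 2)]

feats = geometric_features(points)
print(len(feats))  # 8 choose 2 = 28 pairwise distances
```

With eight points this yields a compact 28-dimensional feature vector, which conveys the abstract's point that very few landmarks can still carry enough geometric information for emotion classification.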