{"title":"Rotated asymetrique haar features for face detection","authors":"M. Oualla, A. Sadiq","doi":"10.1109/CIST.2016.7805094","DOIUrl":null,"url":null,"abstract":"In our previous work, we have proposed a new approach to detect rotated object at distinct angles using the ViolaJones detector. This approach consists in feeding the groups of Haar features presented by Viola & Jones, Lienhart and others by other features which are rotated by any angle. In this paper we have extended this set of features by others called normal and rotated asymmetric Haar features. To concretize our method, we test our algorithm on two databases (Umist and CMU-PIE), containing a set of faces attributed to many variations in scale, location, orientation (in-plane rotation), pose (out-of-plane rotation), facial expression, lighting conditions, occlusions, etc.","PeriodicalId":196827,"journal":{"name":"2016 4th IEEE International Colloquium on Information Science and Technology (CiSt)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 4th IEEE International Colloquium on Information Science and Technology (CiSt)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIST.2016.7805094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In our previous work, we proposed a new approach to detecting objects rotated at arbitrary angles using the Viola-Jones detector. The approach augments the sets of Haar features introduced by Viola & Jones, Lienhart, and others with features rotated by any angle. In this paper, we extend this feature set further with what we call normal and rotated asymmetric Haar features. To validate the method, we test our algorithm on two databases (UMIST and CMU-PIE) containing faces that vary widely in scale, location, orientation (in-plane rotation), pose (out-of-plane rotation), facial expression, lighting conditions, occlusion, etc.
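At their core, the Haar features being extended are rectangle-sum differences evaluated in constant time over an integral image. As a minimal sketch of that underlying mechanism (an upright two-rectangle feature only; the rotated and asymmetric variants the paper proposes require rotated summed-area tables in the style of Lienhart, and the function names here are illustrative, not from the paper):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column,
    so ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle with top-left corner (x, y),
    computed from 4 lookups in the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_haar(ii, x, y, w, h):
    """Upright two-rectangle Haar feature: left half minus right half.
    A vertical edge in the window gives a large |response|."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every feature evaluation costs a fixed handful of array lookups regardless of window size, a cascade can test thousands of such features per candidate window cheaply, which is what makes the Viola-Jones framework fast enough for detection.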