Face validation using 3D information from single calibrated camera
N. Katsarakis, Aristodemos Pnevmatikakis
2009 16th International Conference on Digital Signal Processing, 2009-07-05
DOI: 10.1109/ICDSP.2009.5201140
Citations: 3
Abstract
Detection of faces in cluttered scenes under arbitrary imaging conditions (pose, expression, illumination and distance) is prone to both misses and false positives. The well-established approach of using boosted cascades of simple classifiers addresses the problem of missed faces by using fewer stages in the cascade. This constrains the misses by making detection easier, but increases the false positives. False positives can be reduced by validating the detected image regions as faces. This has previously been accomplished using the color and pattern information of the detected image regions. In this paper we propose a novel face validation method based on 3D position estimates from a single calibrated camera. This is done by assuming a typical face width; the widths of the detected image regions then lead to target position estimates. Detected image regions with extreme position estimates can be discarded. We apply our method to the rich dataset of the CLEAR2007 evaluation campaign, comprising 49 thousand annotated indoor images, recorded at five different sites with four different cameras per site, and depicting approximately 122 thousand faces. Our method yields very accurate 3D position estimates, leading to superior results compared to color- and pattern-based face validation methods.
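The core idea in the abstract can be sketched with the pinhole camera model: assuming a typical physical face width W and a calibrated focal length f, a detection of pixel width w lies at depth Z = fW/w, and back-projecting the detection's center gives a 3D position that can be checked for plausibility. The following is a minimal sketch under assumed, illustrative parameters (face width, intrinsics, and range threshold are not taken from the paper):

```python
# Sketch of 3D-position-based face validation from a single calibrated
# camera. All numeric parameters below are illustrative assumptions.

FACE_WIDTH_M = 0.16      # assumed typical physical face width (metres)
FOCAL_PX = 800.0         # assumed focal length in pixels (from calibration)
CX, CY = 320.0, 240.0    # assumed principal point (pixels)

def estimate_3d_position(u, v, bbox_width_px):
    """Back-project a detected face region to a camera-frame 3D point.

    Depth follows the pinhole model Z = f * W / w, where W is the
    assumed face width and w the detection's width in pixels; (u, v)
    is the detection's center in the image.
    """
    z = FOCAL_PX * FACE_WIDTH_M / bbox_width_px
    x = (u - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return x, y, z

def is_plausible(position, max_range_m=8.0):
    """Reject detections with extreme 3D estimates, e.g. depths larger
    than the monitored room allows (threshold is illustrative)."""
    _, _, z = position
    return 0.0 < z <= max_range_m

# Example: a 100-px-wide detection centred at pixel (400, 250)
pos = estimate_3d_position(400, 250, 100)   # depth = 800*0.16/100 = 1.28 m
accepted = is_plausible(pos)
```

A very wide detection implies an implausibly close face and a very narrow one an implausibly distant face, so such regions are discarded as false positives.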