{"title":"Multi-scale dictionaries based fingerprint orientation field estimation","authors":"Chunjie Chen, Jianjiang Feng, Jie Zhou","doi":"10.1109/ICB.2016.7550071","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550071","url":null,"abstract":"Orientation field estimation is critically important for fingerprint recognition. The dictionary-based algorithm and its variant, the localized-dictionaries-based algorithm, have shown promising performance. In this paper, we extend the original dictionary-based algorithm to a multi-scale version. The motivation is that small-scale dictionaries are more accurate, while large-scale dictionaries are more robust to image noise. Hence, information from orientation fields at different scales can be integrated to obtain better results. A multi-layer MRF model is used to formulate and solve the proposed problem. Experimental results on a challenging latent fingerprint database demonstrate the advantages of the proposed algorithm.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129835852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate iris segmentation in non-cooperative environments using fully convolutional networks","authors":"Nianfeng Liu, Haiqing Li, Man Zhang, Jing Liu, Zhenan Sun, T. Tan","doi":"10.1109/ICB.2016.7550055","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550055","url":null,"abstract":"Conventional iris recognition requires controlled conditions (e.g., close acquisition distance and a stop-and-stare scheme) and high user cooperation for image acquisition. Non-cooperative acquisition environments introduce many adverse factors such as blur, off-axis gaze, occlusions, and specular reflections, which challenge existing iris segmentation approaches. In this paper, we present two iris segmentation models, namely hierarchical convolutional neural networks (HCNNs) and multi-scale fully convolutional networks (MFCNs), for noisy iris images acquired at-a-distance and on-the-move. Both models automatically locate iris pixels without handcrafted features or rules. Moreover, the features and classifiers are jointly optimized. They are end-to-end models that require no further pre- or post-processing and outperform other state-of-the-art methods. Compared with HCNNs, MFCNs take inputs of arbitrary size and produce correspondingly-sized outputs without sliding-window prediction, which makes MFCNs more efficient. The shallow, fine layers and deep, global layers are combined in MFCNs to capture both the texture details and the global structure of iris patterns. Experimental results show that MFCNs are more robust than HCNNs to noise, and improve on the current state of the art by 25.62% and 13.24% on the UBIRIS.v2 and CASIA.v4-distance databases, respectively.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128678016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss","authors":"N. Susyanto, R. Veldhuis, L. Spreeuwers, C. Klaassen","doi":"10.1109/ICB.2016.7550094","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550094","url":null,"abstract":"We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, the algorithms in such face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence, and its goal is to minimize discrimination loss. On synthetic and real databases (NIST-face and Face3D), we show that our method is accurate and reliable, using the cost of the log-likelihood ratio and the information-theoretic empirical cross-entropy (ECE).","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128431805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scale space texture analysis for face anti-spoofing","authors":"Z. Boulkenafet, Jukka Komulainen, Xiaoyi Feng, A. Hadid","doi":"10.1109/ICB.2016.7550078","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550078","url":null,"abstract":"Face spoofing detection (i.e., face anti-spoofing) is emerging as a new research area and has attracted a considerable number of works over the past five years. This paper addresses for the first time the key problem of variation in input image quality and resolution in face anti-spoofing. In contrast to most existing works, which aim at extracting multiscale descriptors from the original face images, we derive a new multiscale space in which to represent the face images before texture feature extraction. The new multiscale space representation is derived through multiscale filtering; three filtering methods are considered: Gaussian scale space, Difference-of-Gaussians scale space, and Multiscale Retinex. Extensive experiments on three challenging and publicly available face anti-spoofing databases demonstrate the effectiveness of the proposed multiscale space representation in improving the performance of face spoofing detection based on gray-scale and color texture descriptors.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125798194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-specific effects in Fingerprint Presentation Attacks Detection: Insights for future research","authors":"Luca Ghiani, G. Marcialis, F. Roli, Pierluigi Tuveri","doi":"10.1109/ICB.2016.7550081","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550081","url":null,"abstract":"A fingerprint presentation attack detector (FPAD) is designed to achieve a certain performance regardless of the targeted user population. However, two recent works on facial traits showed that a PAD system can exploit very useful information from the targeted user population. In this paper, we explore whether such information exists in fingerprints when textural features are adopted. We show by experiments that such features embed not only intrinsic differences between a given fingerprint replica and a generic live fingerprint, but also characteristics present in other fingers of the same user, and characteristics extracted directly from spoofs of the targeted fingerprint itself. This evidence could lead to novel developments in the design of future FPADs.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116511623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ICB-RW 2016: International challenge on biometric recognition in the wild","authors":"J. Neves, Hugo Proença","doi":"10.1109/ICB.2016.7550066","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550066","url":null,"abstract":"Biometric recognition in fully unconstrained conditions, such as those observed in visual surveillance scenarios, has not yet been achieved. The ICB-RW competition was promoted to support this endeavor, being the first biometric challenge carried out on data that realistically result from surveillance scenarios. The competition relied on an innovative master-slave surveillance system for the acquisition of face imagery at-a-distance and on-the-move. This paper describes the competition details and reports the performance achieved by the participants' algorithms.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124233187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biometric recognition of surgically altered periocular region: A comprehensive study","authors":"K. Raja, Ramachandra Raghavendra, C. Busch","doi":"10.1109/ICB.2016.7550070","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550070","url":null,"abstract":"Wide acceptance of biometrics as an authentication mode has led to the investigation of multiple modalities, such as face, periocular, and iris, for long-term robustness. Due to deformities arising from deteriorating health, the desire to enhance appearance by choice, or the need to repair injuries resulting from trauma or aging, people undergo surgery. However, such surgeries do not guarantee the restoration of physical biometric characteristics (face, periocular, iris, etc.) to their original appearance, thereby impacting the performance of biometric identification. Among the many physical biometric characteristics, periocular recognition is widely accepted for authentication purposes. This work studies the impact of periocular surgeries on biometric performance. To this end, we introduce a new large-scale periocular surgery database comprising 402 unique periocular images acquired before and after surgery. This is the first work to provide a comprehensive study evaluating the impact of surgeries in the periocular region on periocular recognition. Extensive experiments are carried out on the newly created dataset using 11 different state-of-the-art periocular recognition schemes. Further, we also explore score-level fusion of these algorithms. Results obtained on the newly created large-scale database indicate degraded identification performance for both the state-of-the-art and fusion algorithms.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123761176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bootstrapping Joint Bayesian model for robust face verification","authors":"Cheng Cheng, Junliang Xing, Youji Feng, Deling Li, Xiangdong Zhou","doi":"10.1109/ICB.2016.7550088","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550088","url":null,"abstract":"Generative Bayesian models have exhibited good performance on the face verification problem, i.e., determining whether two faces are from the same person. As one of the most representative methods, the Joint Bayesian (JB) model represents two faces jointly by introducing appropriate priors, providing better separability between different face classes. The EM-like learning algorithm of the JB model, however, is occasionally observed to have unsatisfactory convergence properties during the iterative training process. In this paper, we present a Bootstrapping Joint Bayesian (BJB) model which demonstrates good convergence behavior. The BJB model explicitly addresses the classification difficulties of different classes by gradually re-weighting the training samples, driving the Bayesian models to pay more attention to the hard training samples. Experiments on a new challenging benchmark demonstrate promising results of the proposed model compared to the baseline Bayesian models.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122466876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face attribute prediction using off-the-shelf CNN features","authors":"Yang Zhong, Josephine Sullivan, Haibo Li","doi":"10.1109/ICB.2016.7550092","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550092","url":null,"abstract":"Predicting attributes from face images in the wild is a challenging computer vision problem. To automatically describe face attributes from face-containing images, one traditionally needs to cascade three technical blocks - face localization, facial descriptor construction, and attribute classification - in a pipeline. As a typical classification problem, face attribute prediction has been addressed using deep learning. The current state-of-the-art performance was achieved by using two cascaded Convolutional Neural Networks (CNNs), which were specifically trained to learn face localization and attribute description. In this paper, we experiment with an alternative way of employing the power of deep representations from CNNs. Combined with conventional face localization techniques, we use off-the-shelf architectures trained for face recognition to build facial descriptors. Recognizing that describable face attributes are diverse, our face descriptors are constructed from different levels of the CNNs for different attributes to best facilitate face attribute prediction. Experiments on two large datasets, LFWA and CelebA, show that our approach is entirely comparable to the state of the art. Our findings not only demonstrate an efficient face attribute prediction approach, but also raise an important question: how to leverage the power of off-the-shelf CNN representations for novel tasks.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129261239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}