{"title":"Near-IR to visible light face matching: Effectiveness of pre-processing options for commercial matchers","authors":"John S. Bernhard, Jeremiah R. Barr, K. Bowyer, P. Flynn","doi":"10.1109/BTAS.2015.7358780","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358780","url":null,"abstract":"The use of near-IR images for face recognition has been proposed as a means to address illumination issues that can hinder standard visible light face matching. However, most existing non-experimental databases contain visible light images. This makes the matching of near-IR face images to visible light face images an interesting and useful challenge. Image pre-processing techniques can potentially be used to help reduce the differences between near-IR and visible light images, with the goal of improving matching accuracy. We evaluate the use of several such techniques in combination with commercial matchers and show that simply extracting the red plane results in a comparable improvement in accuracy. In addition, we show that many of the pre-processing techniques hinder the ability of existing commercial matchers to extract templates. We also make available a new dataset called Near Infrared Visible Light Database (ND-NIVL) consisting of visible light and near-IR face images with accompanying baseline performance for several commercial matchers.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122054623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of iris recognition reliability for eyes affected by ocular pathologies","authors":"Mateusz Trokielewicz, A. Czajka, P. Maciejewicz","doi":"10.1109/BTAS.2015.7358747","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358747","url":null,"abstract":"This paper presents an analysis of how the iris recognition is impacted by eye diseases and an appropriate dataset comprising 2996 iris images of 230 distinct eyes (including 184 illness-affected eyes representing more than 20 different eye conditions). The images were collected in near infrared and visible light during a routine ophthalmological practice. The experimental study shows four valuable results. First, the enrollment process is highly sensitive to those eye conditions that make the iris obstructed or introduce geometrical distortions. Second, even those conditions that do not produce visible changes to the iris structure may increase the dissimilarity among samples of the same eyes. Third, eye conditions affecting iris geometry, its tissue structure or producing obstructions significantly decrease the iris recognition reliability. Fourth, for eyes afflicted by a disease, the most prominent effect of the disease on iris recognition is to cause segmentation errors. To our knowledge this is the first database of iris images for disease-affected eyes made publicly available to researchers, and the most comprehensive study of what we can expect when the iris recognition is deployed for non-healthy eyes.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"445 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123430663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning-based ballistic breech face impression image matching","authors":"Joseph Roth, Andrew Carriveau, Xiaoming Liu, Anil K. Jain","doi":"10.1109/BTAS.2015.7358774","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358774","url":null,"abstract":"Ballistic images of a cartridge case or bullet carry distinct “fingerprints” of the firearm, which is the foundation of widely used forensic examination in criminal investigations. In recent years, prior work has explored the effectiveness of correlation-based approaches in matching ballistic imagery. However, most of these studies focused on highly controlled situations and used relatively simple image processing techniques, without leveraging supervised learning schemes. Toward improving the matching accuracy, especially on operational data, we propose a learning-based approach to compute the similarity between two ballistic images with breech face impressions. Specifically, after a global alignment between the reference and probe images, we unroll them into the polar coordinate for robust feature extraction and global registration. A gentleBoost-based learning scheme selects an optimal set of local cells, each constituting a weak classifier using the cross-correlation function. Experimental results and comparison with state-of-the-art methods on the NIST database and a new operational database, OFL, obtained from Michigan State Forensics Laboratory demonstrate the viability of our approach.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126767202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information-theoretic performance evaluation of likelihood-ratio based biometric score fusion under modality selection attacks","authors":"Takao Murakami, Kenta Takahashi","doi":"10.1109/BTAS.2015.7358767","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358767","url":null,"abstract":"Likelihood-ratio based biometric score fusion is gaining much attention, since it maximizes accuracy if a log-likelihood ratio (LLR) is correctly estimated. It can also handle some missing query samples due to adverse physical conditions (e.g. injuries, illness) by setting the corresponding LLRs to 0. In this paper, we refer to the mode that allows missing query samples in such a way as a “modality selection mode”, and clarify a problem with the accuracy in this mode. We firstly propose a “modality selection attack”, which inputs only query samples whose LLRs are more than 0 (i.e. takes an optimal strategy) to impersonate others. We secondly consider the case when both genuine users and impostors take this optimal strategy, and prove information-theoretically that the overall accuracy in this case is “worse” than that in the case when they input all query samples. Specifically, we prove, both theoretically and experimentally, that the KL (Kullback-Leibler) divergence between a genuine distribution of integrated scores and an impostor's one, which can be compared with password entropy, is smaller in the former case. We also show quantitatively to what extent the KL divergence losses.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116271380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BMDT: An optimized method for Biometric Menagerie Detection","authors":"He Zheng, Liao Ni, Ran Xian, Shilei Liu, Wenxin Li","doi":"10.1109/BTAS.2015.7358751","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358751","url":null,"abstract":"Biometric menagerie is an important phenomenon in biometric systems, which focuses on distinguishing the minority of people who perform poorly and cause the majority of the errors (FAR and FRR). It can help to evaluate biometric systems and improve their performance by analyzing the animal like users. The fundamental step of this theory is the detection of animals. If the detection is not accurate, it may lead to potential problems. However, the current theories carried out by Doddington et al. (1998) and Yager (2008) both neglected the threshold in biometric systems when detecting animals, which might reduce the accuracy of animal detection. To verify this conjecture, we apply the above two theories to detect the existence of animals on a special finger vein database PFVD - Perfect Finger Vein Database. The characteristic of PFVD is that its accuracy is 100%, indicating zero FAR and zero FRR. From the intuitive point of view, there should exist no goat, lamb or wolf in Doddington's menagerie, and no worm, chameleon or phantom in Yager's menagerie. However, the experiments show the negative results, implying that the current theories are not perfect on animal detection. This paper analyzes the two theories and proposes BMDT - Biometric Menagerie Detection with Threshold, an optimized method based on Yager. By taking threshold into account, BMDT makes a significant improvement on the accuracy of animal detection, compared to the current theory. We apply BMDT on PFVD, and the results show that the falsely detected animals by Yager's method are removed. In addition, we evaluate BMDT in 3 more general cases, proving the advantage of the proposed method.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128906297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modest proposals for improving biometric recognition papers","authors":"J. Matey, G. W. Quinn, P. Grother, Elham Tabassi, C. Watson, J. Wayman","doi":"10.1109/BTAS.2015.7358778","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358778","url":null,"abstract":"We present practical recommendations for improving the clarity, transparency, and usefulness of many biometric papers. Several of the recommendations can be enabled by preparing a publicly available library of state of the art Receiver Operating Characteristics (ROCs). We propose such a library and invite suggestions on its details.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114884274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep pyramid Deformable Part Model for face detection","authors":"Rajeev Ranjan, Vishal M. Patel, R. Chellappa","doi":"10.1109/BTAS.2015.7358755","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358755","url":null,"abstract":"We present a face detection algorithm based on Deformable Part Models and deep pyramidal features. The proposed method called DP2MFD is able to detect faces of various sizes and poses in unconstrained conditions. It reduces the gap in training and testing of DPM on deep features by adding a normalization layer to the deep convolutional neural network (CNN). Extensive experiments on four publicly available unconstrained face detection datasets show that our method is able to capture the meaningful structure of faces and performs significantly better than many competitive face detection algorithms.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127494784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}