{"title":"A Meta-Recognition Based Skin Marks Matching Algorithm with Feature Fusion for Forensic Identification","authors":"Peicong Yu, A. Kong","doi":"10.1109/ICB2018.2018.00027","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00027","url":null,"abstract":"Soft biometrics, such as skin marks, play an important role in forensic identification, for they cannot only supplement hard biometrics to improve the overall identification performance, but may also serve as supportive evidence when hard biometrics is not available. Skin marks are small and difficult to be accurately detected due to different lighting conditions, poses as well as individual variation in their skin marks. In this paper, we propose a meta-recognition based skin marks matching algorithm to address these challenges for forensic identification. The algorithm combines both the geometric information in spatial distribution of skin marks and the appearance information of individual skin mark to establish the correspondence between two images. A multi-level skin marks matching scheme is adopted and fusion of scores is carried out at different levels using a meta-recognition method. The experimental results show that the new algorithm provides over 22% of improvement in terms of rank-1 accuracy over the previously proposed method.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"555 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131626371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verifying the Newborns without Infection Risks Using Contactless Palmprints","authors":"Ramachandra Raghavendra, K. Raja, S. Venkatesh, Sneha Hegde, Shreedhar D. Dandappanavar, C. Busch","doi":"10.1109/ICB2018.2018.00040","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00040","url":null,"abstract":"Verification of new-born babies utilizing the biometric characteristics has received an increased attention, especially in applications such as law enforcement, vaccination tracking, and medical services. In this work, we present an introductory study on exploring contactless palmprint biometric for the verification of new-borns. To the best of our knowledge, this is the first work to explore automatic contactless palmprint verification of new-born babies. We have captured a new database of contactless palmprint images from 50 new-born babies in two different sessions. The first session data is captured between 6-8 hours after the birth and the second session data is captured between 28-36 hours after the birth. Extensive experiments are carried out using seven different state-of-the-art palmprint algorithms to benchmark both left and right contactless palmprint characteristics captured from the new-born babies. We further propose a new method based on transfer learning by fine-tuning the pre-trained AlexNet architecture to improve the verification accuracy. Our experiments have demonstrated improved results using proposed scheme and thereby indicate the benefit of the contactless palmprint data to verify the identity of the new-born babies.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130626298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fully Associative Patch-Based 1-to-N Matcher for Face Recognition","authors":"Lingfeng Zhang, I. Kakadiaris","doi":"10.1109/ICB2018.2018.00032","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00032","url":null,"abstract":"This paper focuses on improving face recognition performance by a patch-based 1-to-N signature matcher that learns correlations between different facial patches. A Fully Associative Patch-based Signature Matcher (FAPSM) is proposed so that the local matching identity of each patch contributes to the global matching identities of all the patches. The proposed matcher consists of three steps. First, based on the signature, the local matching identity and the corresponding matching score of each patch are computed. Then, a fully associative weight matrix is learned to obtain the global matching identities and scores of all the patches. At last, the l1-regularized weighting is applied to combine the global matching identity of each patch and obtain a final matching identity. The proposed matcher has been integrated with the UR2D system for evaluation. The experimental results indicate that the proposed matcher achieves better performance than the current UR2D system. The Rank-1 accuracy is improved significantly by 3% and 0.55% on the UHDB31 dataset and the IJB-A dataset, respectively.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122160786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting Linguistic Style as a Cognitive Biometric for Continuous Verification","authors":"T. Neal, Kalaivani Sundararajan, D. Woodard","doi":"10.1109/ICB2018.2018.00048","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00048","url":null,"abstract":"This paper presents an assessment of continuous verification using linguistic style as a cognitive biometric. In stylometry, it is widely known that linguistic style is highly characteristic of authorship using representations that capture authorial style at character, lexical, syntactic, and semantic levels. In this work, we provide a contrast to previous efforts by implementing a one-class classification problem using Isolation Forests. Our approach demonstrates the usefulness of this classifier for accurately verifying the genuine user, and yields recognition accuracy exceeding 98% using very small training samples of 50 and 100-character blocks.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126501191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-spectral Imaging for Robust Ocular Biometrics","authors":"N. Vetrekar, K. Raja, Ramachandra Raghavendra, R. Gad, C. Busch","doi":"10.1109/ICB2018.2018.00038","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00038","url":null,"abstract":"Recent development of sensors has allowed to explore the possibility of biometric authentication beyond visible spectrum.Particularly, multi-spectral imaging has shown a great potential in biometrics to work robustly under unknown varying illumination conditions for face recognition. While face biometrics in traditional settings has also indicated the applicability of ocular regions for improving the recognition performance, there are not many works that have explored recent imaging techniques. In this paper, we present a study that explores the possibility of recognizing ocular biometric features using multi-spectral imaging. While exploring the possibility of recognizing the periocular region in different spectral bands, this work also presents the performance variation of periocular region for cross-spectral recognition. We have captured a new ocular image database in eight narrow spectral bands across Visible (VIS) and Near-Infra-Red (NIR) spectrum (530nm to 1000nm) using our custom built sensor. The database consists of images from 52 subjects with a sample size of 4160 spectral band images captured in two different sessions. The extensive set of experimental evaluation obtained on the state-of-the-art methods indicate highest recognition rate of 96.92% at Rank-1, demonstrating the potential of multi-spectral imaging for robust periocular recognition.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125672309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fine-Grained Multi-Attribute Adversarial Learning for Face Generation of Age, Gender and Ethnicity","authors":"Lipeng Wan, Jun Wan, Yi Jin, Zichang Tan, S. Li","doi":"10.1109/ICB2018.2018.00025","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00025","url":null,"abstract":"Since the Generative Adversarial Network (GAN) was proposed, facial image generation used for face recognition has been studied in recent two years. However, there are few GAN-based methods applied for fine-grained facial attribute analysis, such as face generation with precise age. In this paper, fine-grained multi-attribute GAN (FM-GAN) is presented, which can generate fine-grained face image under specific multiply attributes, such as 30-year-old white man. It shows that the proposed FM-GAN with fine-grained multi-label conditions is better than conditional GAN (cGAN) in terms of image visual fidelity. Besides, synthetic images generated by FM-GAN are used for data augmentation for face attribute analysis. Experiments also demonstrate that synthetic images can assist the CNN training and relieve the problem of insufficient data.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127333535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Effectiveness of Anomaly Detection Approaches against Unseen Presentation Attacks in Face Anti-spoofing","authors":"O. Nikisins, A. Mohammadi, André Anjos, S. Marcel","doi":"10.1109/ICB2018.2018.00022","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00022","url":null,"abstract":"While face recognition systems got a significant boost in terms of recognition performance in recent years, they are known to be vulnerable to presentation attacks. Up to date, most of the research in the field of face anti-spoofing or presentation attack detection was considered as a two-class classification task: features of bona-fide samples versus features coming from spoofing attempts. The main focus has been on boosting the anti-spoofing performance for databases with identical types of attacks across both training and evaluation subsets. However, in realistic applications the types of attacks are likely to be unknown, potentially occupying a broad space in the feature domain. Therefore, a failure to generalize on unseen types of attacks is one of the main potential challenges in existing anti-spoofing approaches. First, to demonstrate the generalization issues of two-class anti-spoofing systems we establish new evaluation protocols for existing publicly available databases. Second, to unite the data collection efforts of various institutions we introduce a challenging Aggregated database composed of 3 publicly available datasets: Replay-Attack, Replay-Mobile and MSU MFSD, reporting the performance on it. Third, considering existing limitations we propose a number of systems approaching a task of presentation attack detection as an anomaly detection, or a one-class classification problem, using only bona-fide features in the training stage. Using less training data, hence requiring less effort in the data collection, the introduced approach demonstrates a better generalization properties against previously unseen types of attacks on the proposed Aggregated database.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133659789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Person Recognition beyond the Visible Spectrum: Combining Body Shape and Texture from mmW Images","authors":"E. González-Sosa, R. Vera-Rodríguez, Julian Fierrez, Vishal M. Patel","doi":"10.1109/ICB2018.2018.00044","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00044","url":null,"abstract":"Biometrics have been tradittionally based on images acquired in the visible spectrum. In this paper, we will go first into details regarding the regions beyond the visible spectrum that have been already explored in the literature for biometrics to overcome some of the limitations found in the visible region. Later, we will introduce millimeter imaging as a new region of the spectrum that has also potential in biometrics.~To this aim, we first consider shape and texture information individually for person recognition. Later, we compare them and study to what extent the joint use of shape and texture can provide further improvements. Results suggest that both sources of information can complement each other, reaching verification results of 1.5% EER. This result motivates us to think that in the future, person recognition can be integrated within the millimeter screening scanners already deployed in airports, and enhance this way security.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124816487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"De-Mark GAN: Removing Dense Watermark with Generative Adversarial Network","authors":"Jinlin Wu, Hailin Shi, Shu Zhang, Zhen Lei, Yang Yang, S. Li","doi":"10.1109/ICB2018.2018.00021","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00021","url":null,"abstract":"This paper mainly considers the MeshFace verification problem with dense watermarks. A dense watermark often covers the crucial parts of face photo, thus degenerating the performance in the existing face verification system. The key to solving it is to preserve the ID information while removing the dense watermark. In this paper, we propose an improved GAN model, named De-mark GAN, for MeshFace verification. It consists of one generator and one global-internal discriminator. The generator is an encoderdecoder architecture with a pixel reconstruction loss and a feature loss. It maps a MeshFace photo to a representation vector, and then decodes the vector to a RGB ID photo. The succedent global-internal discriminator integrates a global discriminator and an internal discriminator with a global loss and internal loss, respectively. It can ensure the generated image quality and preserve the the ID information of recovered ID photos. Experimental results show that the verification benefits well from the recovered ID photos with high quality and our proposed De-mark GAN can achieve a competitive result in both image quality and verification.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121476112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical Multi-class Iris Classification for Liveness Detection","authors":"Zihui Yan, Lingxiao He, Man Zhang, Zhenan Sun, T. Tan","doi":"10.1109/ICB2018.2018.00018","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00018","url":null,"abstract":"In modern society, iris recognition has become increasingly popular. The security risk of iris recognition is increasing rapidly because of the attack by various patterns of fake iris. A German hacker organization called Chaos Computer Club cracked the iris recognition system of Samsung Galaxy S8 recently. In view of these risks, iris liveness detection has shown its significant importance to iris recognition systems. The state-of-the-art algorithms mainly rely on hand-crafted texture features which can only identify fake iris images with single pattern. In this paper, we proposed a Hierarchical Multi-class Iris Classification (HMC) for liveness detection based on CNN. HMC mainly focuses on iris liveness detection of multi-pattern fake iris. The proposed method learns the features of different fake iris patterns by CNN and classifies the genuine or fake iris images by hierarchical multi-class classification. This classification takes various characteristics of different fake iris patterns into account. All kinds of fake iris patterns are divided into two categories by their fake areas. The process is designed as two steps to identify two categories of fake iris images respectively. Experimental results demonstrate an extremely higher accuracy of iris liveness detection than other state-of-the-art algorithms. The proposed HMC remarkably achieves the best results with nearly 100% accuracy on ND-Contact, CASIA-Iris-Interval, CASIA-Iris-Syn and LivDet-Iris-2017-Warsaw datasets. The method also achieves the best results with 100% accuracy on a hybrid dataset which consists of ND-Contact and LivDet-Iris-2017-Warsaw datasets.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125748046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}