{"title":"Protecting Your Faces: MeshFaces Generation and Removal via High-Order Relation-Preserving CycleGAN","authors":"Zhihang Li, Yibo Hu, Man Zhang, Min Xu, R. He","doi":"10.1109/ICB2018.2018.00020","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00020","url":null,"abstract":"Protecting people's face photos from misuse has become an important issue with the rapid development of ubiquitous face sensors. MeshFaces provide a simple and inexpensive way to protect facial photos and have been widely used in China. This paper treats MeshFace generation and removal as a dual learning problem and proposes a high-order relation-preserving CycleGAN framework to solve it. First, dual transformations between the distributions of MeshFaces and clean faces in pixel space are learned under the CycleGAN framework, which can efficiently utilize unpaired data. Then, a novel High-order Relation-preserving (HR) loss is imposed on CycleGAN to recover finer texture details and generate much sharper images. Unlike the L1 and L2 losses, which produce smooth and blurry images, the HR loss better captures the appearance variation of MeshFaces and hence facilitates their removal. Moreover, an Identity-Preserving loss is proposed to preserve both global and local identity information. Experimental results on three databases demonstrate that our approach is highly effective for MeshFace generation and removal.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132086458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shedding Light on the Veins - Reflected Light or Transillumination in Hand-Vein Recognition","authors":"Christof Kauba, A. Uhl","doi":"10.1109/ICB2018.2018.00050","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00050","url":null,"abstract":"The near-infrared light source is a crucial part of a hand-vein scanner. Depending on its position, there are two main illumination types: reflected light and transillumination. Commercial scanners and all publicly available data sets use reflected light. We established two dual-illumination dorsal hand-vein data sets (one of them made publicly available) including both reflected light and transillumination images, acquired using the same subjects, hand position, and environmental conditions. This enables a direct comparison of the two illumination scenarios as well as cross-illumination matching. Several experiments utilising common hand-vein recognition algorithms were carried out to quantify the recognition performance in each illumination scenario and in cross-illumination matching.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132196138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Eye Color from Near Infrared Iris Images","authors":"Denton Bobeldyk, A. Ross","doi":"10.1109/ICB2018.2018.00026","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00026","url":null,"abstract":"Iris recognition systems typically acquire images of the iris in the near-infrared (NIR) spectrum rather than the visible spectrum. The use of NIR imaging facilitates the extraction of texture even from dark-colored irides (e.g., brown eyes). While NIR sensors reveal the textural details of the iris, the pigmentation and color details that are normally observed in the visible spectrum are subdued. In this work, we develop a method to predict the color of the iris from NIR images. In particular, we demonstrate that it is possible to distinguish between light-colored irides (blue, green, hazel) and dark-colored irides (brown) in the NIR spectrum by using the BSIF texture descriptor. Experiments on the BioCOP 2009 dataset containing over 43,000 iris images indicate that it is possible to distinguish between these two categories of eye color with an accuracy of 90%. This suggests that the structure and texture of the iris as manifested in 2D NIR iris images divulge information about the pigmentation and color of the iris.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121742170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Face Recognition by Exploring Local Features with Visual Attention","authors":"Yichun Shi, Anil K. Jain","doi":"10.1109/ICB2018.2018.00045","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00045","url":null,"abstract":"Over the past several years, the performance of state-of-the-art face recognition systems has improved significantly, due in large part to the increasing amount of available face datasets and the proliferation of deep neural networks. This rapid increase in performance has left existing popular evaluation protocols, such as standard LFW, nearly saturated and has motivated the emergence of new, more challenging protocols aimed specifically at unconstrained face recognition. In this work, we employ parts-based face recognition models to further improve the performance of state-of-the-art face recognition systems as evaluated by both the LFW protocol and the newer, more challenging protocols (BLUFR, IJB-A, and IJB-B). In particular, we employ spatial transformers to automatically localize discriminative facial parts, which enables us to build an end-to-end network where global and local features are fused together, making the final feature representation more discriminative. Experimental results using these discriminative features on the BLUFR, IJB-A, and IJB-B protocols show that the proposed approach is able to boost the performance of state-of-the-art face recognition systems. The proposed approach is not limited to one architecture but can also be applied to other face recognition networks.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133724177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context-Aware Fusion for Continuous Biometric Authentication","authors":"Divya Sivasankaran, M. Ragab, T. Sim, Yair Zick","doi":"10.1109/ICB2018.2018.00043","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00043","url":null,"abstract":"Continuous authentication using biometrics is receiving renewed attention owing to recent advances in mobile technology. However, the context in which biometric inputs are acquired can affect the quality of information available for authentication. For example, in multi-speaker environments, face or gait could be better authenticators than voice. Unfortunately, existing fusion methods do not take this into account. In this paper, we propose a novel fusion method that accounts for context, and that can operate at both decision and score levels. Theoretical bounds on the proposed method are presented along with experiments on synthetic and real multi-modal biometric data. The results show that our proposed method is better than commonly used fusion methods, even when using state-of-the-art deep learners. Moreover, our method outperforms score-level fusion methods even at the decision-level, debunking the common myth that decision-level fusion is inferior, and showcasing the power of contextual learning.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116051510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Cross Polarimetric Thermal-to-Visible Face Recognition","authors":"S. M. Iranmanesh, Ali Dabouei, Hadi Kazemi, N. Nasrabadi","doi":"10.1109/ICB2018.2018.00034","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00034","url":null,"abstract":"In this paper, we present a deep coupled learning framework to address the problem of matching polarimetric thermal face photos against a gallery of visible faces. Polarization-state information provides textural and geometric details that are missing from conventional thermal face imagery but are present in the visible spectrum. We propose a coupled deep neural network architecture that leverages relatively large visible and thermal datasets to overcome overfitting, and then train it on a polarimetric thermal face dataset, the first of its kind. Unlike conventional shallow thermal-to-visible face recognition methods, the proposed architecture makes full use of the polarimetric thermal information to train a deep model. The coupled network also finds global discriminative features in a nonlinear embedding space that relate the polarimetric thermal faces to their corresponding visible faces. The results show the superiority of our method over state-of-the-art cross thermal-to-visible face recognition algorithms.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115980267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fingerprint Distortion Rectification Using Deep Convolutional Neural Networks","authors":"Ali Dabouei, Hadi Kazemi, S. M. Iranmanesh, J. Dawson, N. Nasrabadi","doi":"10.1109/ICB2018.2018.00012","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00012","url":null,"abstract":"Elastic distortion of fingerprints has a negative effect on the performance of fingerprint recognition systems, which brings inconvenience to users in authentication applications. In the negative recognition scenario, where users may intentionally distort their fingerprints, this is a serious problem, since distortion can prevent the recognition system from identifying malicious users. Current methods aimed at addressing this problem still have limitations. First, they are often inaccurate because they estimate distortion parameters from the ridge frequency and orientation maps of input samples, which are themselves unreliable due to distortion. Second, they are inefficient, requiring significant computation time to rectify samples. In this paper, we develop a rectification model based on a Deep Convolutional Neural Network (DCNN) to accurately estimate distortion parameters from the input image. Using a comprehensive database of synthetic distorted samples, the DCNN learns to accurately estimate distortion bases ten times faster than the dictionary search methods used in previous approaches. Evaluating the proposed method on public databases of distorted samples shows that it can significantly improve the matching performance of distorted samples.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"514 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131921403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Minutiae Extractor: Integrating Deep Networks and Fingerprint Domain Knowledge","authors":"Dinh-Luan Nguyen, Kai Cao, Anil K. Jain","doi":"10.1109/ICB2018.2018.00013","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00013","url":null,"abstract":"We propose a fully automatic minutiae extractor, called MinutiaeNet, based on deep neural networks with a compact feature representation for fast comparison of minutiae sets. Specifically, a first network, called CoarseNet, estimates the minutiae score map and minutiae orientation based on a convolutional neural network and fingerprint domain knowledge (enhanced image, orientation field, and segmentation map). Subsequently, a second network, called FineNet, refines the candidate minutiae locations based on the score map. We demonstrate the effectiveness of using fingerprint domain knowledge together with deep networks. Experimental results on both latent (NIST SD27) and plain (FVC 2004) public domain fingerprint datasets provide comprehensive empirical support for the merits of our method. Further, our method finds minutiae sets that are better in terms of precision and recall in comparison with the state-of-the-art on these two datasets. Given the lack of annotated fingerprint datasets with minutiae ground truth, the proposed approach to robust minutiae detection will be useful for training network-based fingerprint matching algorithms as well as for evaluating fingerprint individuality at scale. MinutiaeNet is implemented in TensorFlow: https://github.com/luannd/MinutiaeNet","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124334767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of PPG Biometrics for Authentication in Different States","authors":"Umang Yadav, S. N. Abbas, D. Hatzinakos","doi":"10.1109/ICB2018.2018.00049","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00049","url":null,"abstract":"Amongst all medical biometric traits, the photoplethysmogram (PPG) is the easiest to acquire. PPG records blood-volume changes using just a combination of a light-emitting diode and a photodiode, from any part of the body. With the penetration of IoT and smart homes, PPG recording can easily be integrated with other vital wearable devices. PPG reflects the peculiarities of each individual's hemodynamics and cardiovascular system. This paper presents a non-fiducial method for PPG-based biometric authentication. Being a physiological signal, PPG varies with physical/mental stress and over time; for robustness, these variations cannot be ignored. While most previous works focused only on single-session data, this paper presents an extensive performance evaluation of PPG biometrics against single-session data, different emotions, physical exercise, and time-lapse, using the Continuous Wavelet Transform (CWT) and Direct Linear Discriminant Analysis (DLDA). When evaluated on different states and datasets, an equal error rate (EER) of 0.5%-6% was achieved with an average training time of 45-60 s. Our CWT/DLDA-based technique outperformed all other dimensionality reduction techniques and previous works.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129209162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TV-GAN: Generative Adversarial Network Based Thermal to Visible Face Recognition","authors":"Teng Zhang, A. Wiliem, Siqi Yang, B. Lovell","doi":"10.1109/ICB2018.2018.00035","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00035","url":null,"abstract":"This work tackles the face recognition task on images captured using thermal camera sensors, which can operate in dark environments. While this can greatly increase the scope and benefits of current security surveillance systems, performing such a task using thermal images is a challenging problem compared to face recognition in the Visible Light Domain (VLD). This is partly due to the significantly smaller amount of thermal imagery collected compared to VLD data. Unfortunately, directly applying existing, very strong face recognition models trained on VLD data to thermal imagery does not produce satisfactory performance, owing to the domain gap between thermal and VLD images. To this end, we propose a Thermal-to-Visible Generative Adversarial Network (TV-GAN) that transforms thermal face images into their corresponding VLD images whilst maintaining identity information sufficient for existing VLD face recognition models to perform recognition. Some examples are presented in Figure 1. Unlike previous methods, our proposed TV-GAN uses an explicit closed-set face recognition loss to regularize the discriminator network training. This information is then conveyed to the generator network in the form of a gradient loss. In the experiments, we show that with this additional explicit regularization of the discriminator network, TV-GAN is able to preserve more identity information when translating a thermal image of a person not seen by TV-GAN during training.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116390375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}