{"title":"Facial landmark detection on thermal data via fully annotated visible-to-thermal data synthesis","authors":"Khawla Mallat, J. Dugelay","doi":"10.1109/IJCB48548.2020.9304854","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304854","url":null,"abstract":"Thermal imaging has substantially evolved, during the recent years, to be established as a complement, or even occasionally as an alternative to conventional visible light imaging, particularly for face analysis applications. Facial landmark detection is a crucial prerequisite for facial image processing. Given the upswing of deep learning based approaches, the performance of facial landmark detection has been significantly improved. However, this uprise is merely limited to visible spectrum based face analysis tasks, as there are only few research works on facial landmark detection in thermal spectrum. This limitation is mainly due to the lack of available thermal face databases provided with full facial landmark annotations. In this paper, we propose to tackle this data shortage by converting existing face databases, designed for facial landmark detection task, from visible to thermal spectrum that will share the same provided facial landmark annotations. Using the synthesized thermal databases along with the facial landmark annotations, two different models are trained using active appearance models and deep alignment network. Evaluating the models trained on synthesized thermal data on real thermal data, we obtained facial landmark detection accuracy of 94.59% when tested on low quality thermal data and 95.63% when tested on high quality thermal data with a detection threshold of 0.15×IOD.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114504853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is Face Recognition Safe from Realizable Attacks?","authors":"Sanjay Saha, T. Sim","doi":"10.1109/IJCB48548.2020.9304864","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304864","url":null,"abstract":"Face recognition is a popular form of biometric authentication and due to its widespread use, attacks have become more common as well. Recent studies show that Face Recognition Systems are vulnerable to attacks and can lead to erroneous identification of faces. Interestingly, most of these attacks are white-box, or they are manipulating facial images in ways that are not physically realizable. In this paper, we propose an attack scheme where the attacker can generate realistic synthesized face images with subtle perturbations and physically realize that onto his face to attack black-box face recognition systems. Comprehensive experiments and analyses show that subtle perturbations realized on attackers face can create successful attacks on state-of-the-art face recognition systems in black-box settings. Our study exposes the underlying vulnerability posed by the Face Recognition Systems against realizable black-box attacks.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117287703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysing the Performance of LSTMs and CNNs on 1310 nm Laser Data for Fingerprint Presentation Attack Detection","authors":"Jascha Kolberg, Alexandru-Cosmin Vasile, M. Gomez-Barrero, C. Busch","doi":"10.1109/IJCB48548.2020.9304888","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304888","url":null,"abstract":"Due to the wide operational deployment of biometric recognition systems, presentation attacks targeting the capture device have become a severe threat. Especially for fingerprint recognition, a high number of different materials allows the creation of numerous presentation attack instruments (PAIs) in the form of full fake fingers and fingerprint overlays, which very much resemble the skin properties at fingertips. As a consequence, automated presentation attack detection (PAD) mechanisms are of utmost importance. Utilising a 1310 nm laser in a new capture device, we present an evaluation of three long short-term memory (LSTM) networks in comparison to eight convolutional neural networks (CNNs) on a database comprising over 22,000 samples and including 45 different PAI species. The LSTMs analyse temporal properties within a captured sequence in order to detect blood movement, while the CNNs take into account spatial properties within a single frame to focus on reflections by the PAI material. The results show that the diversity of PAI species is too big for a single classifier to correctly detect all presentation attacks. However, by fusing the scores from distinct algorithms, we can achieve a detection accuracy of 3.71% APCER for a convenient BPCER of 0.2%.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127196382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Confident Are You in Your Estimate of a Human Age? Uncertainty-aware Gait-based Age Estimation by Label Distribution Learning","authors":"Atsuya Sakata, Yasushi Makihara, Noriko Takemura, D. Muramatsu, Y. Yagi","doi":"10.1109/IJCB48548.2020.9304914","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304914","url":null,"abstract":"Gait-based age estimation is one of key techniques for many applications (e.g., finding lost children/aged wanders). It is well known that the age estimation uncertainty is highly dependent on ages (i.e., it is generally small for children while is large for adults/the elderly), and it is important to know the uncertainty for the above-mentioned applications. We therefore propose a method of uncertainty-aware gait-based age estimation by introducing a label distribution learning framework. More specifically, we design a network which takes an appearance-based gait feature as an input and outputs discrete label distributions in the integer age domain. Experiments with the world-largest gait database OULP-Age show that the proposed method can successfully represent the uncertainty of age estimation and also outperforms or is comparable to the state-of-the-art methods.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131162063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Copyright","authors":"","doi":"10.1109/ijcb48548.2020.9304915","DOIUrl":"https://doi.org/10.1109/ijcb48548.2020.9304915","url":null,"abstract":"","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129945644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biometric Identification and Presentation-Attack Detection using Micro- and Macro-Movements of the Eyes","authors":"Silvia Makowski, L. Jäger, Paul Prasse, T. Scheffer","doi":"10.1109/IJCB48548.2020.9304900","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304900","url":null,"abstract":"We study involuntary micro-movements of both eyes, in addition to saccadic macro-movements, as biometric characteristic. We develop a deep convolutional neural network that processes binocular oculomotoric signals and identifies the viewer. In order to be able to detect presentation attacks, we develop a model in which the movements are a response to a controlled stimulus. The model detects replay attacks by processing both the controlled but randomized stimulus and the ocular response to this stimulus. We acquire eye movement data from 150 participants, with 4 sessions per participant. We observe that the model detects replay attacks reliably; compared to prior work, the model attains substantially lower error rates.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134167626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LDM-DAGSVM: Learning Distance Metric via DAG Support Vector Machine for Ear Recognition Problem","authors":"Ibrahim Omara, Guangzhi Ma, E. Song","doi":"10.1109/IJCB48548.2020.9304871","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304871","url":null,"abstract":"Recently, the ear recognition system takes more increasingly interesting for many applications, especially, in immigration system, forensic, and surveillance applications. For face re-identification and image classification, metric learning has significantly improved machine learning accuracies by using K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers. However, metric learning via SVM has not yet been investigated for the ear recognition problem. To achieve better generalization ability than the traditional previous classifiers, a novel framework for ear recognition is proposed based on learning distance metric (LDM) via SVM since the LDM and the directed acyclic graph SVM (DAGSVM) are two emerging techniques which perform outstanding in dealing with classification problems. This work considers metric learning for SVM by proposing a hybrid learning distance metric and directed acyclic graph SVM (LDM-DAGSVM) model for ear recognition system. Different from existing ear biometric methods, the proposed approach aims to learn a Mahalanobis distance metric via SVM to maximize the inter-class variations and minimize the intra-class variations, simultaneously. The experiments are conducted on complicated ear datasets and the results can achieve better performance compared with the state-of-the-art ear recognition methods. The proposed approach can get classification accuracy up to 98.79%, 98.70%, and 84.30% for AWE, AME and WPUT ear datasets, respectively.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134050629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DVRNet: Decoupled Visible Region Network for Pedestrian Detection","authors":"Lei Shi, Charles Livermore, I. Kakadiaris","doi":"10.1109/IJCB48548.2020.9304883","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304883","url":null,"abstract":"Pedestrian detection remains a challenging task due to the problems caused by occlusion variance. Visible-body bounding boxes are typically used as an extra supervision signal to improve the performance of pedestrian detection to predict the full-body. However, visible-body assisted approaches produce a large number of false positives, which result from a lack of adequate and discriminative full-body contextual information. In this paper, we propose a new network, dubbed DVRNet, based on the representative visible-body assisted pedestrian detector named Bi-box. Specifically, we extend Bi-box by adding three modules named the attention-based feature interleaver module (AFIM), the binary mask learning module (BMLM), and the head-aware feature enhancement module (HFEM), which play important roles in employing features learned by the visible-body and the head supervision signals to enrich high discriminative contextual information of the full-body and enhance the power of feature representation. Experimental results indicate that the DVRNet achieves promising results on the CityPersons and the CrowdHuman datasets.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129385435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DBLFace: Domain-Based Labels for NIR-VIS Heterogeneous Face Recognition","authors":"Ha A. Le, I. Kakadiaris","doi":"10.1109/IJCB48548.2020.9304884","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304884","url":null,"abstract":"Deep learning-based domain-invariant feature learning methods are advancing in near-infrared and visible (NIR-VIS) heterogeneous face recognition. However, these methods are prone to overfitting due to the large intra-class variation and the lack of NIR images for training. In this paper, we introduce Domain-Based Label Face (DBLFace), a learning approach based on the assumption that a subject is not represented by a single label but by a set of labels. Each label represents images of a specific domain. In particular, a set of two labels per subject, one for the NIR images and one for the VIS images, are used for training a NIR-VIS face recognition model. The classification of images into different domains reduces the intra-class variation and lessens the negative impact of data imbalance in training. To train a network with sets of labels, we introduce a domain-based angular margin loss and a maximum angular loss to maintain the inter-class discrepancy and to enforce the close relationship of labels in a set. Quantitative experiments confirm that DBLFace significantly improves the rank-1 identification rate by 6.7% on the EDGE20 dataset and achieves state-of-the-art performance on the CASIA NIR-VIS 2.0 dataset.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129778763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Partial Fingerprint Verification via Spatial Transformer Networks","authors":"Zhiyuan He, Eryun Liu, Z. Xiang","doi":"10.1109/IJCB48548.2020.9304877","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304877","url":null,"abstract":"Partial fingerprint verification is a challenging task because of the few features contained in small area as well as the large rotation angle and translation between query images and template images. In this paper, we propose a new framework of partial fingerprint verification based on spatial transformer networks (STN) model, where a transform model, i.e., AlignNet network, is proposed to estimate the alignment parameters, and the verification is modeled as a binary classification task. The experimental results on the simulated datasets created from FVC2004 and the real-world dataset FVC2006 DB1 show that our method is invariant to rotation, and also robust to different kinds of scanners, and dramatically outperforms the rank-1 entry of FVC2006 participants. The EER on FVC2006 DB1 of the proposed algorithm is 3.587% compared to that of 5.564%, the best of FVC2006 DB1 entries.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125389008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}