{"title":"Vulnerability Assessment and Detection of Makeup Presentation Attacks","authors":"C. Rathgeb, P. Drozdowski, Daniel Fischer, C. Busch","doi":"10.1109/IWBF49977.2020.9107961","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107961","url":null,"abstract":"The accuracy of face recognition systems can be negatively affected by facial cosmetics, which have the ability to substantially alter the facial appearance. Recently, it was shown that makeup can also be abused to launch so-called makeup presentation attacks. In such attacks, an attacker might apply heavy makeup to achieve the facial appearance of a target subject for the purpose of impersonation. In this work, we assess the vulnerability of a widely used open-source face recognition system, i.e. ArcFace, to makeup presentation attacks using the publicly available Makeup Induced Face Spoofing (MIFS) and FRGCv2 databases. It is shown that the makeup presentation attacks in the MIFS database have a negligible impact on the security of the face recognition system. Further, we employ image warping to simulate improved makeup presentation attacks, which achieve a significantly higher success rate. Moreover, we propose a makeup attack detection scheme which compares face depth data with face depth reconstructions obtained from RGB images of potential makeup presentation attacks. Significant variations between the two sources of information indicate facial shape alterations induced by heavy use of makeup, i.e. potential makeup presentation attacks. 
Conceptual experiments on the MIFS database confirm the soundness of the presented approach.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114787652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Periocular Biometrics in Head-Mounted Displays: A Sample Selection Approach for Better Recognition","authors":"F. Boutros, N. Damer, K. Raja, Raghavendra Ramachandra, Florian Kirchbuchner, Arjan Kuijper","doi":"10.1109/IWBF49977.2020.9107939","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107939","url":null,"abstract":"Virtual and augmented reality technologies are increasingly used in a wide range of applications. Such technologies employ a Head Mounted Display (HMD) that typically includes an eye-facing camera and is used for eye tracking. As some of these applications require accessing or transmitting highly sensitive private information, a trusted verification of the operator’s identity is needed. We investigate the use of the HMD setup to verify the operator’s identity using periocular images captured by the built-in camera. However, the uncontrolled nature of the periocular capture within the HMD results in images with a high variation in relative eye location and eye-opening due to varied interactions. Therefore, we propose a new normalization scheme to align the ocular images and, subsequently, a new reference sample selection protocol to achieve higher verification accuracy. The applicability of our proposed scheme is exemplified using two handcrafted feature extraction methods and two deep-learning strategies. 
We conclude that such a verification approach is feasible despite the uncontrolled nature of the captured ocular images, especially when a proper alignment and sample selection strategy is employed.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114993690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure Triplet Loss for End-to-End Deep Biometrics","authors":"João Ribeiro Pinto, Jaime S. Cardoso, M. V. Correia","doi":"10.1109/IWBF49977.2020.9107958","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107958","url":null,"abstract":"Although deep learning is being widely adopted for every topic in pattern recognition, its use for secure and cancelable biometrics is currently reserved for feature extraction and biometric data preprocessing, limiting achievable performance. In this paper, we propose a novel formulation of the triplet loss methodology, designated as secure triplet loss, that enables biometric template cancelability with end-to-end convolutional neural networks, using easily changeable keys. Trained and evaluated for electrocardiogram-based biometrics, the network proved easy to optimize using the modified triplet loss and achieved superior performance when compared with the state-of-the-art (10.63% equal error rate with data from 918 subjects of the UofTDB database). Additionally, it ensured biometric template security and effective template cancelability. Although further efforts are needed to avoid template linkability, the proposed secure triplet loss shows promise in template cancelability and non-invertibility for biometric recognition while taking advantage of the full power of convolutional neural networks.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129162579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Template Protection on Multiple Facial Biometrics in the Signal Domain under Visible and Near-Infrared Light","authors":"Simon Kirchgasser, L. Debiasi, R. Schraml, H. Hofbauer, A. Uhl, Jonathan N. Boyle, J. Ferryman","doi":"10.1109/IWBF49977.2020.9107964","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107964","url":null,"abstract":"Template protection techniques like cancellable biometrics have been introduced to overcome privacy issues in biometric applications. We conduct an ISO/IEC 24745-compliant assessment of block re-mapping and warping, focusing on recognition performance issues as well as security and unlinkability aspects. Both of these template protection schemes are applied to a multi-biometrics dataset in the signal (image) domain. The dataset includes 2D face, iris and periocular images acquired using both visible light (VIS) and near-infrared (NIR) light. With respect to the used data, this is the first study that applies and evaluates cancellable template protection methods in the signal domain on VIS/NIR 2D face, iris and periocular biometrics.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"244 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131649526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial-Temporal Omni-Scale Feature Learning for Person Re-Identification","authors":"Aida Pločo, Andrea Macarulla Rodriguez, Z. Geradts","doi":"10.1109/IWBF49977.2020.9107966","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107966","url":null,"abstract":"State-of-the-art person re-identification (ReID) models use Convolutional Neural Networks (CNN) for feature extraction and comparison. Often these models fail to recognize all the intra- and inter-class variations that emerge in person ReID, making it harder to discriminate between data subjects. In this paper we seek to reduce these problems and improve performance by combining two state-of-the-art models. We use the Omni-Scale Network (OSNet) as our CNN to test the Market1501 and DukeMTMC-reID datasets for person ReID. To fully utilize the potential of these datasets, we apply the spatial-temporal constraint, which extracts the camera ID and timestamp from each image to form a distribution. We combine these two methods to create a hybrid model titled Spatial-Temporal Omni-Scale Network (st-OSNet). Our model attains a Rank-1 (R1) accuracy of 98.2% and mean average precision (mAP) of 92.7% for the Market1501 dataset. 
For the DukeMTMC-reID dataset, our model achieves 94.3% R1 and 86.1% mAP, thereby surpassing the results of OSNet by a large margin on both datasets (94.3% R1 and 86.4% mAP on Market1501; 88.4% R1 and 76.1% mAP on DukeMTMC-reID).","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116770660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison-Level Mitigation of Ethnic Bias in Face Recognition","authors":"Philipp Terhörst, M. Tran, N. Damer, Florian Kirchbuchner, Arjan Kuijper","doi":"10.1109/IWBF49977.2020.9107956","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107956","url":null,"abstract":"Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Previous works introduced approaches that aim at learning less biased representations. However, applying these approaches in real applications requires a complete replacement of the templates in the database. This replacement procedure further requires that a face image of each enrolled individual is stored as well. In this work, we propose the first bias-mitigating solution that works on the comparison-level of a biometric system. We propose a fairness-driven neural network classifier for the comparison of two biometric templates to replace the system’s similarity function. This fair classifier is trained with a novel penalization term in the loss function to introduce the criteria of group and individual fairness to the decision process. This penalization term forces the score distributions of different ethnicities to be similar, leading to a reduction of the intra-ethnic performance differences. Experiments were conducted on two publicly available datasets to evaluate the performance for four different ethnicities. The results showed that for both fairness criteria, our proposed approach is able to significantly reduce the ethnic bias while preserving a high recognition ability. Our model, built on individual fairness, achieves a bias reduction rate between 15.35% and 52.67%. 
In contrast to previous work, our solution is easy to integrate into existing systems by simply replacing the system’s similarity function with our fair template comparison approach.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124260382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Based Stress Prediction From Offline Signatures","authors":"Hakan Yekta Yatbaz, Meryem Erbilek","doi":"10.1109/IWBF49977.2020.9107942","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107942","url":null,"abstract":"Soft-biometric measurements are now increasingly adopted as a robust means of determining an individual’s non-unique characteristics, using the emerging models that are widely adopted in the deep learning domain. This approach is clearly valuable in a variety of scenarios, especially those relating to forensics. In this study, we specifically focus on the stress emotion and propose an automatic stress prediction technique for offline signature biometrics using well-known deep learning architectures such as AlexNet, ResNet and DenseNet. Given the limited body of research studying emotion prediction from offline handwritten signatures with deep learning methods, to the best of our knowledge this is the first experimental study of its kind, achieving a prediction accuracy of around 77%.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127135267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}