{"title":"Ethnicity and Biometric Uniqueness: Iris Pattern Individuality in a West African Database","authors":"John Daugman;Cathryn Downing;Oluwatobi Noah Akande;Oluwakemi Christiana Abikoye","doi":"10.1109/TBIOM.2023.3327287","DOIUrl":"10.1109/TBIOM.2023.3327287","url":null,"abstract":"We conducted more than 1.3 million comparisons of iris patterns encoded from images collected at two Nigerian universities, which constitute the newly available African Human Iris (AFHIRIS) database. The purpose was to discover whether ethnic differences in iris structure and appearance, such as textural feature size, as contrasted with an all-Chinese image database or an American database in which only 1.53% were of African-American heritage, made a material difference for iris discrimination. We measured a reduction in entropy for the AFHIRIS database due to the coarser iris features created by the thick anterior layer of melanocytes, and we found stochastic parameters that accurately model the relevant empirical distributions. Quantile-Quantile analysis revealed that a very small change in operational decision thresholds for the African database would compensate for the reduced entropy and generate the same performance in terms of resistance to False Matches. We conclude that despite demographic differences, individuality can be robustly discerned by comparison of iris patterns in this West African population.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"79-86"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134981086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cancelable Face Recognition Using Deep Steganography","authors":"Koichi Ito;Takashi Kozu;Hiroya Kawai;Goki Hanawa;Takafumi Aoki","doi":"10.1109/TBIOM.2023.3327694","DOIUrl":"10.1109/TBIOM.2023.3327694","url":null,"abstract":"In biometrics, the secure transfer and storage of biometric samples are important for protecting the privacy and security of the data subject. One of the methods for authentication while protecting biometric samples is cancelable biometrics, which transforms the features and uses the transformed features for authentication. Among the methods of cancelable biometrics, steganography-based approaches have been proposed, in which secret information is embedded in another image to hide its existence. In this paper, we propose cancelable biometrics based on deep steganography for face recognition. We embed a face image or its face features into a cover image to generate a stego image with the same appearance as the cover image. By using a dedicated face feature extractor, we can perform face recognition without restoring the embedded face image or face features from the stego image. We demonstrate the effectiveness of the proposed method compared to conventional steganography-based methods through performance and security evaluation experiments using public face image datasets. In addition, we present a potential application of the proposed method that improves the security of face recognition by using a QR code with a one-time password as the cover image.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"87-102"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10296007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134980588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3-D Face Morphing Attacks: Generation, Vulnerability and Detection","authors":"Jag Mohan Singh;Raghavendra Ramachandra","doi":"10.1109/TBIOM.2023.3324684","DOIUrl":"10.1109/TBIOM.2023.3324684","url":null,"abstract":"Face Recognition Systems (FRS) have been found to be vulnerable to morphing attacks, where the morphed face image is generated by blending the face images from contributory data subjects. This work presents a novel direction for generating face-morphing attacks in 3D. To this end, we introduce a novel approach based on blending 3D face point clouds corresponding to contributory data subjects. The proposed method generates 3D face morphs by projecting the input 3D face point clouds onto depth maps and 2D color images, followed by image blending and warping operations performed independently on the color images and depth maps. We then back-project the 2D morphed color map and depth map to the point cloud using the canonical (fixed) view. Given that the generated 3D face morphing models will contain holes owing to the single canonical view, we propose a new hole-filling algorithm that yields high-quality 3D face morphing models. Extensive experiments were conducted on the newly generated 3D face dataset comprising 675 3D scans corresponding to 41 unique data subjects and a publicly available database (Facescape) with 100 data subjects. Experiments were performed to benchmark the vulnerability of the proposed 3D morph-generation scheme against automatic 2D and 3D FRS and human observer analysis. We also present a quantitative assessment of the quality of the generated 3D face-morphing models using eight different quality metrics. Finally, we propose three different 3D face Morphing Attack Detection (3D-MAD) algorithms to benchmark the performance of 3D face morphing attack detection techniques.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"103-117"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10286232","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136374230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2023 Index IEEE Transactions on Biometrics, Behavior, and Identity Science Vol. 5","authors":"","doi":"10.1109/TBIOM.2023.3323413","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3323413","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"606-615"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8423754/10273758/10278523.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49989177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2023.3311478","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3311478","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8423754/10273758/10273760.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49989201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2023.3311406","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3311406","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8423754/10273758/10273704.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explaining Deep Face Algorithms Through Visualization: A Survey","authors":"Thrupthi Ann John;Vineeth N. Balasubramanian;C. V. Jawahar","doi":"10.1109/TBIOM.2023.3319837","DOIUrl":"10.1109/TBIOM.2023.3319837","url":null,"abstract":"Although current deep models for face tasks surpass human performance on some benchmarks, we do not understand how they work. Thus, we cannot predict how they will react to novel inputs, resulting in catastrophic failures and unwanted biases in the algorithms. Explainable AI helps bridge the gap, but currently, there are very few visualization algorithms designed for faces. This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain. We explore the nuances and caveats of adapting general-purpose visualization algorithms to the face domain, illustrated by computing visualizations on popular face models. We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks. We also determine the design considerations for practical face visualizations accessible to AI practitioners by conducting a user study on the utility of various explainability algorithms.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"15-29"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135794552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Residual Distillation Network for Face Anti-Spoofing With Feature Attention Learning","authors":"Yan He;Fei Peng;Min Long","doi":"10.1109/TBIOM.2023.3312128","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3312128","url":null,"abstract":"Currently, most face anti-spoofing methods target the generalization problem by relying on auxiliary information such as additional annotations and modalities. However, this auxiliary information is unavailable in practical scenarios, which potentially hinders the application of these methods. Meanwhile, their predetermined or fixed characteristics limit their generalization capability. To counter these problems, a dynamic residual distillation network with feature attention learning (DRDN) is developed to adaptively search a discriminative representation and embedding space without accessing any auxiliary information. Specifically, a pixel-level residual distillation module is first designed to obtain domain-irrelevant liveness representations by suppressing both high-level semantic and low-frequency illumination factors, so that the domain divergence between the source and target domains can be adaptively mitigated. Secondly, feature-level attention contrastive learning is proposed to construct a distance-aware asymmetrical embedding space that avoids class-boundary over-fitting. Finally, an attention enhancement backbone incorporating attention blocks is designed to automatically capture important regions and channels during feature extraction. Experimental results and analysis demonstrate that the proposed method outperforms state-of-the-art anti-spoofing methods in both single-source and multi-source domain generalization scenarios.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"579-592"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49989175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AFR-Net: Attention-Driven Fingerprint Recognition Network","authors":"Steven A. Grosz;Anil K. Jain","doi":"10.1109/TBIOM.2023.3317303","DOIUrl":"10.1109/TBIOM.2023.3317303","url":null,"abstract":"The use of vision transformers (ViT) in computer vision is increasing due to their limited inductive biases (e.g., locality, weight sharing, etc.) and increased scalability compared to other deep learning models. This has led to some initial studies on the use of ViT for biometric recognition, including fingerprint recognition. In this work, we improve on these initial studies by i.) evaluating additional attention-based architectures, ii.) scaling to larger and more diverse training and evaluation datasets, and iii.) combining the complementary representations of attention-based and CNN-based embeddings for improved state-of-the-art (SOTA) fingerprint recognition (both authentication and identification). Our combined architecture, AFR-Net (Attention-Driven Fingerprint Recognition Network), outperforms several baseline models, including a SOTA commercial fingerprint system by Neurotechnology, Verifinger v12.3, across intra-sensor, cross-sensor, and latent-to-rolled fingerprint matching datasets. Additionally, we propose a realignment strategy using local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low-certainty situations, which boosts the overall recognition accuracy significantly. This realignment strategy requires no additional training and can be applied as a wrapper to any existing deep learning network (attention-based, CNN-based, or both) to boost its performance in a variety of computer vision tasks.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"30-42"},"PeriodicalIF":0.0,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135555656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive Direction-Aware Pose Grammar for Human Pose Estimation","authors":"Lu Zhou;Yingying Chen;Jinqiao Wang","doi":"10.1109/TBIOM.2023.3315509","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3315509","url":null,"abstract":"Human pose estimation is challenged by many factors, such as complex articulation and occlusion. Generally, message passing among human joints plays an important role in rectifying the wrong detections caused by these challenges. In this paper, we propose a progressive direction-aware pose grammar model that performs message passing by building the pose grammar in a novel fashion. Firstly, a multi-scale Bi-C3D pose grammar module is proposed to promote message passing among human joints within a local range. We propose to conduct message passing by means of 3D convolution (C3D), which proves to be more effective than other sequential modeling techniques. To facilitate the message passing, we devise a novel adaptive direction guidance module in which explicit direction information is embedded. Besides, we propose to fuse the final results with attention maps to make full use of the bidirectional information; the fusion can be regarded as an ensemble process. Secondly, a more economical global regional grammar is introduced to build the relationships among human joints globally. The local-to-global modeling scheme promotes message passing in a progressive manner and boosts performance by a large margin. Promising results are achieved on the MPII, LSP and COCO benchmarks.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"593-605"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49989176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}