IET Biometrics | Pub Date: 2024-01-27 | DOI: 10.1049/2024/8526857
Claudio Yáñez, Juan E. Tapia, Claudio A. Perez, Christoph Busch
{"title":"Impact of Occlusion Masks on Gender Classification from Iris Texture","authors":"Claudio Yáñez, Juan E. Tapia, Claudio A. Perez, Christoph Busch","doi":"10.1049/2024/8526857","DOIUrl":"10.1049/2024/8526857","url":null,"abstract":"<div>\u0000 <p>Gender classification on normalized iris images has been previously attempted with varying degrees of success. In these previous studies, it has been shown that occlusion masks may introduce gender information; occlusion masks are used in iris recognition to remove non-iris elements. When, the goal is to classify the gender using exclusively the iris texture, the presence of gender information in the masks may result in apparently higher accuracy, thereby not reflecting the actual gender information present in the iris. However, no measures have been taken to eliminate this information while preserving as much iris information as possible. We propose a novel method to assess the gender information present in the iris more accurately by eliminating gender information in the masks. This consists of pairing iris with similar masks and different gender, generating a paired mask using the OR operator, and applying this mask to the iris. Additionally, we manually fix iris segmentation errors to study their impact on the gender classification. Our results show that occlusion masks can account for 6.92% of the gender classification accuracy on average. 
Therefore, works aiming to perform gender classification using the iris texture from normalized iris images should eliminate this correlation.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/8526857","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140492836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
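The mask-pairing idea described in the abstract can be sketched in a few lines. This is an illustrative toy (flat 0/1 lists standing in for normalized-iris occlusion masks; function names are ours, not the paper's): OR-combining the masks of two samples from different gender classes yields a joint mask that occludes the union of both regions, so the mask shape itself no longer separates the classes.

```python
# Toy sketch of the paper's mask-pairing step (1 = occluded, 0 = valid iris).
def pair_masks(mask_a, mask_b):
    """OR-combine two binary occlusion masks given as flat lists of 0/1."""
    return [a | b for a, b in zip(mask_a, mask_b)]

def apply_mask(iris, mask, fill=0):
    """Blank out occluded pixels of a normalized iris (flat list of values)."""
    return [fill if m else px for px, m in zip(iris, mask)]

mask_m = [0, 0, 1, 0]   # mask from one sample of the pair
mask_f = [0, 1, 0, 0]   # a similar mask from a sample of the other gender
joint = pair_masks(mask_m, mask_f)
print(joint)                              # [0, 1, 1, 0]
print(apply_mask([10, 20, 30, 40], joint))  # [10, 0, 0, 40]
```

Because both members of a pair receive the same joint mask, any classifier trained on the masked textures can only exploit the remaining iris pixels.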
IET Biometrics | Pub Date: 2024-01-17 | DOI: 10.1049/2024/4924184
Fen Dai, Ziyang Wang, Xiangqun Zou, Rongwen Zhang, Xiaoling Deng
{"title":"Noncontact Palm Vein ROI Extraction Based on Improved Lightweight HRnet in Complex Backgrounds","authors":"Fen Dai, Ziyang Wang, Xiangqun Zou, Rongwen Zhang, Xiaoling Deng","doi":"10.1049/2024/4924184","DOIUrl":"10.1049/2024/4924184","url":null,"abstract":"<div>\u0000 <p>The extraction of ROI (region of interest) was a key step in noncontact palm vein recognition, which was crucial for the subsequent feature extraction and feature matching. A noncontact palm vein ROI extraction algorithm based on the improved HRnet for keypoints localization was proposed for dealing with hand gesture irregularities, translation, scaling, and rotation in complex backgrounds. To reduce the computation time and model size for ultimate deploying in low-cost embedded systems, this improved HRnet was designed to be lightweight by reconstructing the residual block structure and adopting depth-separable convolution, which greatly reduced the model size and improved the inference speed of network forward propagation. Next, the palm vein ROI localization and palm vein recognition are processed in self-built dataset and two public datasets (CASIA and TJU-PV). The proposed improved HRnet algorithm achieved 97.36% accuracy for keypoints detection on self-built palm vein dataset and 98.23% and 98.74% accuracy for keypoints detection on two public palm vein datasets (CASIA and TJU-PV), respectively. The model size was only 0.45 M, and on a CPU with a clock speed of 3 GHz, the average running time of ROI extraction for one image was 0.029 s. Based on the keypoints and corresponding ROI extraction, the equal error rate (EER) of palm vein recognition was 0.000362%, 0.014541%, and 0.005951% and the false nonmatch rate was 0.000001%, 11.034725%, and 4.613714% (false match rate: 0.01%) in the self-built dataset, TJU-PV, and CASIA, respectively. 
The experimental result showed that the proposed algorithm was feasible and effective and provided a reliable experimental basis for the research of palm vein recognition technology.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/4924184","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139526814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
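For readers unfamiliar with the metrics quoted above: the equal error rate is the operating point where the false match rate (impostors accepted) equals the false non-match rate (genuine attempts rejected). A minimal threshold sweep over similarity scores, purely illustrative and unrelated to the paper's implementation, looks like this:

```python
# Illustrative EER computation: sweep thresholds over genuine/impostor
# similarity scores and return the rate where FMR and FNMR are closest.
def eer(genuine, impostor):
    best_gap, best_eer = float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        fnmr = sum(g < t for g in genuine) / len(genuine)    # genuines rejected
        fmr = sum(i >= t for i in impostor) / len(impostor)  # impostors accepted
        if abs(fmr - fnmr) < best_gap:
            best_gap, best_eer = abs(fmr - fnmr), (fmr + fnmr) / 2
    return best_eer

print(eer([0.9, 0.8, 0.7], [0.4, 0.3, 0.2]))  # 0.0 (perfectly separable)
print(eer([0.6, 0.4], [0.5, 0.3]))            # 0.5 (heavily overlapping)
```

Reporting the false non-match rate at a fixed false match rate of 0.01%, as the abstract does, is the same sweep evaluated at one chosen threshold instead of the crossing point.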
IET Biometrics | Pub Date: 2023-12-18 | DOI: 10.1049/2023/7519499
Laurenz Ruzicka, Dominik Söllinger, Bernhard Kohn, Clemens Heitzinger, Andreas Uhl, Bernhard Strobl
{"title":"Improving Sensor Interoperability between Contactless and Contact-Based Fingerprints Using Pose Correction and Unwarping","authors":"Laurenz Ruzicka, Dominik Söllinger, Bernhard Kohn, Clemens Heitzinger, Andreas Uhl, Bernhard Strobl","doi":"10.1049/2023/7519499","DOIUrl":"10.1049/2023/7519499","url":null,"abstract":"<div>\u0000 <p>Current fingerprint identification systems face significant challenges in achieving interoperability between contact-based and contactless fingerprint sensors. In contrast to existing literature, we propose a novel approach that can combine pose correction with further enhancement operations. It uses deep learning models to steer the correction of the viewing angle, therefore enhancing the matching features of contactless fingerprints. The proposed approach was tested on real data of 78 participants (37,162 contactless fingerprints) acquired by national police officers using both contact-based and contactless sensors. The study found that the effectiveness of pose correction and unwarping varied significantly based on the individual characteristics of each fingerprint. However, when the various extension methods were combined on a finger-wise basis, an average decrease of 36.9% in equal error rates (EERs) was observed. Additionally, the combined impact of pose correction and bidirectional unwarping led to an average increase of 3.72% in NFIQ 2 scores across all fingers, coupled with a 6.4% decrease in EERs relative to the baseline. 
The addition of deep learning techniques presents a promising approach for achieving high-quality fingerprint acquisition using contactless sensors, enhancing recognition accuracy in various domains.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2023 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2023/7519499","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139175263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Biometrics | Pub Date: 2023-12-06 | DOI: 10.1049/2023/6636386
Jingwen Li, Jiuzhen Liang, Hao Liu, Zhenjie Hou
{"title":"Adaptive Weighted Face Alignment by Multi-Scale Feature and Offset Prediction","authors":"Jingwen Li, Jiuzhen Liang, Hao Liu, Zhenjie Hou","doi":"10.1049/2023/6636386","DOIUrl":"10.1049/2023/6636386","url":null,"abstract":"<div>\u0000 <p>Traditional heatmap regression methods have some problems such as the lower limit of theoretical error and the lack of global constraints, which may lead to the collapse of the results in practical application. In this paper, we develop a facial landmark detection model aided by offset prediction to constrain the global shape. First, the hybrid detection model is used to roughly locate the initial coordinates predicted by the backbone network. At the same time, the head rotation attitude prediction module is added to the backbone network, and the Euler angle is used as the adaptive weight to modify the loss function so that the model has better robustness to the large pose image. Then, we introduce an offset prediction network. It uses the heatmap corresponding to the initial coordinates as an attention mask to fuze with the features, so the network can focus on the area around landmarks. This model shares the global features and regresses the offset relative to the real coordinates based on the initial coordinates to further enhance the continuity. In addition, we also add a multi-scale feature pre-extraction module to preprocess features so that we can increase feature scales and receptive fields. 
Experiments on several challenging public datasets show that our method gets better performance than the existing detection methods, confirming the effectiveness of our method.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2023 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2023/6636386","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138596728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
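The coarse-then-offset scheme described in the abstract reduces, at inference time, to adding a predicted sub-pixel offset to each coarse heatmap-peak coordinate. A toy sketch of that final step (names and values are ours, not the authors'):

```python
# Toy sketch of offset-based landmark refinement: a heatmap head yields coarse
# integer landmark coordinates; an offset head yields sub-pixel corrections.
def refine(initial, offsets):
    """Add a predicted (dx, dy) offset to each coarse landmark (x, y)."""
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(initial, offsets)]

coarse = [(64, 32), (70, 35)]        # coarse landmarks from the heatmap peaks
delta = [(0.3, -0.2), (-0.5, 0.1)]   # sub-pixel offsets from the offset head
print(refine(coarse, delta))         # [(64.3, 31.8), (69.5, 35.1)]
```

The offset head is what lifts accuracy past the quantization floor of the heatmap grid, which is the "lower limit of theoretical error" the abstract mentions.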
IET Biometrics | Pub Date: 2023-11-14 | DOI: 10.1049/2023/5087083
Sameera Khan, Dileep Kumar Singh, Mahesh Singh, Desta Faltaso Mena
{"title":"Automatic Signature Verifier Using Gaussian Gated Recurrent Unit Neural Network","authors":"Sameera Khan, Dileep Kumar Singh, Mahesh Singh, Desta Faltaso Mena","doi":"10.1049/2023/5087083","DOIUrl":"10.1049/2023/5087083","url":null,"abstract":"<div>\u0000 <p>Handwritten signatures are one of the most extensively utilized biometrics used for authentication, and forgeries of this behavioral biometric are quite widespread. Biometric databases are also difficult to access for training purposes due to privacy issues. The efficiency of automated authentication systems has been severely harmed as a result of this. Verification of static handwritten signatures with high efficiency remains an open research problem to date. This paper proposes an innovative introselect median filter for preprocessing and a novel Gaussian gated recurrent unit neural network (2GRUNN) as a classifier for designing an automatic verifier for handwritten signatures. The proposed classifier has achieved an FPR of 1.82 and an FNR of 3.03. The efficacy of the proposed method has been compared with the various existing neural network-based verifiers.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2023 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2023/5087083","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134957429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Biometrics | Pub Date: 2023-11-10 | DOI: 10.1049/2023/9353816
U. M. Kelly, M. Nauta, L. Liu, L. J. Spreeuwers, R. N. J. Veldhuis
{"title":"Worst-Case Morphs Using Wasserstein ALI and Improved MIPGAN","authors":"U. M. Kelly, M. Nauta, L. Liu, L. J. Spreeuwers, R. N. J. Veldhuis","doi":"10.1049/2023/9353816","DOIUrl":"10.1049/2023/9353816","url":null,"abstract":"<div>\u0000 <p>A morph is a combination of two separate facial images and contains the identity information of two different people. When used in an identity document, both people can be authenticated by a biometric face recognition (FR) system. Morphs can be generated using either a landmark-based approach or approaches based on deep learning, such as generative adversarial networks (GANs). In a recent paper, we introduced a <i>worst-case</i> upper bound on how challenging morphing attacks can be for an FR system. The closer morphs are to this upper bound, the bigger the challenge they pose to FR. We introduced an approach with which it was possible to generate morphs that approximate this upper bound for a known FR system (white box) but not for unknown (black box) FR systems. In this paper, we introduce a morph generation method that can approximate worst-case morphs even when the FR system is not known. A key contribution is that we include the goal of generating difficult morphs <i>during</i> training. Our method is based on adversarially learned inference (ALI) and uses concepts from Wasserstein GANs trained with gradient penalty, which were introduced to stabilise the training of GANs. We include these concepts to achieve a similar improvement in training stability and call the resulting method Wasserstein ALI (WALI). We finetune WALI using loss functions designed specifically to improve the ability to manipulate identity information in facial images and show how it can generate morphs that are more challenging for FR systems than landmark- or GAN-based morphs. 
We also show how our findings can be used to improve MIPGAN, an existing StyleGAN-based morph generator.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2023 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2023/9353816","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135091763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
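For reference, the gradient-penalty formulation the abstract borrows for training stability is, in its standard WGAN-GP form (this is the generic critic loss from the literature, not necessarily WALI's exact objective):

```latex
L_{\text{critic}} =
  \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\!\left[D(\tilde{x})\right]
  - \mathbb{E}_{x\sim\mathbb{P}_r}\!\left[D(x)\right]
  + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}
    \!\left[\bigl(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\bigr)^{2}\right]
```

where $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generated distributions, $\hat{x}$ is sampled uniformly along straight lines between real and generated samples, and $\lambda$ weights the penalty that softly enforces the critic's 1-Lipschitz constraint.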
IET Biometrics | Pub Date: 2023-10-25 | DOI: 10.1049/2023/9253739
Lizhen Zhou, Lu Yang, Deqian Fu, Gongping Yang
{"title":"Encoding Coefficient Similarity-Based Multifeature Sparse Representation for Finger Vein Recognition","authors":"Lizhen Zhou, Lu Yang, Deqian Fu, Gongping Yang","doi":"10.1049/2023/9253739","DOIUrl":"10.1049/2023/9253739","url":null,"abstract":"<div>\u0000 <p>Finger vein recognition is a promising biometric technology that has received significant research attention. However, most of the existing works often relied on a single feature, which failed to fully exploit the discriminative information in finger vein images, and therefore led to a limited recognition performance. To overcome this limitation, this paper proposes an encoding coefficient similarity-based multifeature sparse representation method for finger vein recognition. The proposed method not only uses multiple features to extract comprehensive information from finger vein images, but also obtains more discriminative information through constraints in the objective function. The sparsity constraint retains the key information of each feature, and the similarity constraint explores the shared information among the features. Furthermore, the proposed method is capable of fusing all kinds of features, not limited to specific ones. The optimization problem of the proposed method is efficiently solved using the alternating direction multiplier method algorithm. 
Experimental results on two public finger vein databases HKPU-FV and SDU-FV show that the proposed method achieves good recognition performance.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2023 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2023/9253739","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135218814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
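An objective of the kind the abstract describes, with a per-feature sparsity term and a cross-feature coefficient-similarity term, can be written generically as below. The notation ($y_k$, $D_k$, $\alpha_k$, weights $\lambda$, $\gamma$) is our illustrative assumption, not the paper's:

```latex
\min_{\{\alpha_k\}}\;
\sum_{k=1}^{K} \bigl\lVert y_k - D_k \alpha_k \bigr\rVert_2^2
\;+\; \lambda \sum_{k=1}^{K} \lVert \alpha_k \rVert_1
\;+\; \gamma \sum_{k<l} \lVert \alpha_k - \alpha_l \rVert_2^2
```

Here $y_k$ is the $k$-th feature extracted from a probe, $D_k$ the corresponding training dictionary, the $\ell_1$ term enforces sparsity within each feature, and the last term encourages the $K$ encoding coefficient vectors to agree, capturing the shared information among features.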
IET Biometrics | Pub Date: 2023-07-26 | DOI: 10.1049/bme2.12111
Emilio Mordini
{"title":"Biometric privacy protection: What is this thing called privacy?","authors":"Emilio Mordini","doi":"10.1049/bme2.12111","DOIUrl":"https://doi.org/10.1049/bme2.12111","url":null,"abstract":"<p>We are at the wake of an epochal revolution, the Information Revolution. The Information Revolution has been accompanied by the rise of a new commodity, digital data, which is changing the world including methods for human recognition. Biometric systems are the recognition technology of the new age. So, privacy scholars tend to frame biometric privacy protection chiefly in terms of biometric data protection. The author argues that this is a misleading perspective. Biometric data protection is an extremely relevant legal and commercial issue but has little to do with privacy. The notion of privacy, understood as a personal intimate sphere, is hardly related to what is contained in this private realm (data or whatever else), rather it is related to the very existence of a secluded space. Privacy relies on having the possibility to hide rather than in hiding anything. What really matters is the existence of a private sphere rather than what is inside. This also holds true for biometric privacy. 
Biometric privacy protection should focus on bodily and psychological integrity, preventing those technology conditions and operating practices that may lead to turn biometric recognition into a humiliating experience for the individual.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 4","pages":"183-193"},"PeriodicalIF":2.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50154581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep features fusion for user authentication based on human activity","authors":"Yris Brice Wandji Piugie, Christophe Charrier, Joël Di Manno, Christophe Rosenberger","doi":"10.1049/bme2.12115","DOIUrl":"https://doi.org/10.1049/bme2.12115","url":null,"abstract":"<p>The exponential growth in the use of smartphones means that users must constantly be concerned about the security and privacy of mobile data because the loss of a mobile device could compromise personal information. To address this issue, continuous authentication systems have been proposed, in which users are monitored transparently after initial access to the smartphone. In this study, the authors address the problem of user authentication by considering human activities as behavioural biometric information. The authors convert the behavioural biometric data (considered as time series) into a 2D colour image. This transformation process keeps all the characteristics of the behavioural signal. Time series does not receive any filtering operation with this transformation, and the method is reversible. This signal-to-image transformation allows us to use the 2D convolutional networks to build efficient deep feature vectors. This allows them to compare these feature vectors to the reference template vectors to compute the performance metric. 
The authors evaluate the performance of the authentication system in terms of Equal Error Rate on a benchmark University of Californy, Irvine Human Activity Recognition dataset, and they show the efficiency of the approach.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 4","pages":"222-234"},"PeriodicalIF":2.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12115","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50154596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
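The key property claimed for the signal-to-image step is reversibility: no information is lost. The simplest mapping with that property is a plain row-major reshape of the 1-D series into a 2-D grid, sketched below; the paper's actual channel-to-colour mapping and image size are details not reproduced here.

```python
# Hedged sketch of a reversible time-series-to-image mapping: row-major
# reshaping applies no filtering, so the original series is recovered exactly.
def series_to_image(series, width):
    """Reshape a flat series into rows of the given width."""
    assert len(series) % width == 0, "series length must be a multiple of width"
    return [series[i:i + width] for i in range(0, len(series), width)]

def image_to_series(image):
    """Inverse mapping: flatten the rows back into the original series."""
    return [v for row in image for v in row]

sig = [1, 2, 3, 4, 5, 6]
img = series_to_image(sig, 3)
print(img)                            # [[1, 2, 3], [4, 5, 6]]
assert image_to_series(img) == sig    # lossless, hence reversible
```

Any such lossless layout lets an off-the-shelf 2D CNN consume the behavioural signal without discarding temporal detail.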
IET Biometrics | Pub Date: 2023-07-24 | DOI: 10.1049/bme2.12117
Md Mahedi Hasan, Nasser Nasrabadi, Jeremy Dawson
{"title":"On improving interoperability for cross-domain multi-finger fingerprint matching using coupled adversarial learning","authors":"Md Mahedi Hasan, Nasser Nasrabadi, Jeremy Dawson","doi":"10.1049/bme2.12117","DOIUrl":"https://doi.org/10.1049/bme2.12117","url":null,"abstract":"<p>Improving interoperability in contactless-to-contact fingerprint matching is a crucial factor for the mainstream adoption of contactless fingerphoto devices. However, matching contactless probe images against legacy contact-based gallery images is very challenging due to the presence of heterogeneity between these domains. Moreover, unconstrained acquisition of fingerphotos produces perspective distortion. Therefore, direct matching of fingerprint features suffers severe performance degradation on cross-domain interoperability. In this study, to address this issue, the authors propose a coupled adversarial learning framework to learn a fingerprint representation in a low-dimensional subspace that is discriminative and domain-invariant in nature. In fact, using a conditional coupled generative adversarial network, the authors project both the contactless and the contact-based fingerprint into a latent subspace to explore the hidden relationship between them using class-specific contrastive loss and ArcFace loss. The ArcFace loss ensures intra-class compactness and inter-class separability, whereas the contrastive loss minimises the distance between the subspaces for the same finger. Experiments on four challenging datasets demonstrate that our proposed model outperforms state-of-the methods and two top-performing commercial-off-the-shelf SDKs, that is, Verifinger v12.0 and Innovatrics. 
In addition, the authors also introduce a multi-finger score fusion network that significantly boosts interoperability by effectively utilising the multi-finger input of the same subject for both cross-domain and cross-sensor settings.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 4","pages":"194-210"},"PeriodicalIF":2.0,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50142814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
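To see why multi-finger fusion helps, consider the simplest baseline it improves on: combining a subject's per-finger match scores by their mean, so one noisy finger cannot sink the decision. The authors' fusion network learns this combination instead of fixing it; the sketch below is only that fixed baseline, with names of our choosing.

```python
# Baseline multi-finger score fusion: average the per-finger similarity
# scores of one subject into a single subject-level score. A learned fusion
# network (as in the paper) replaces this fixed rule with a trained one.
def fuse_scores(finger_scores):
    """Fuse per-finger similarity scores into one subject-level score."""
    return sum(finger_scores) / len(finger_scores)

per_finger = [0.91, 0.85, 0.88, 0.96]   # hypothetical scores for four fingers
print(round(fuse_scores(per_finger), 2))  # 0.9
```

Averaging already reduces score variance roughly with the number of fingers; a trained fusion network can additionally weight fingers by their reliability.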