{"title":"A Deep CNN-Based Feature Extraction and Matching of Pores for Fingerprint Recognition","authors":"Mohammed Ali;Chunyan Wang;M. Omair Ahmad","doi":"10.1109/TBIOM.2024.3516634","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3516634","url":null,"abstract":"The inherent characteristics of fingerprint pores, including their immutability, permanence, and uniqueness in terms of size, shape, and position along ridges, make them suitable candidates for fingerprint recognition. In contrast to only a limited number of other landmarks in a fingerprint, such as minutia, the presence of a large number of pores even in a small fingerprint segment is a very attractive characteristic of pores for fingerprint recognition. A pore-based fingerprint recognition system has two main modules: a pore detection module and a pore feature extraction and matching module. The focus of this paper is on the latter module, in which the features of the detected pores in a query fingerprint are extracted, uniquely represented and then used for matching these pores with those in a template fingerprint. Fingerprint recognition systems that use convolutional neural networks (CNNs) in the design of this module have automatic feature extraction capabilities. However, CNNs used in these modules have inadequate capability of capturing deep-level features. Moreover, the pore matching part of these modules heavily relies only on the Euclidean distance metric, which if used alone, may not provide an accurate measure of similarity between the pores. In this paper, a novel pore feature extraction and matching module is presented in which a CNN architecture is proposed to generate highly representational and discriminative hierarchical features and a balance between the performance and complexity is achieved by using depthwise and depthwise separable convolutions. Furthermore, an accurate composite metric, encompassing the Euclidean distance, angle, and magnitudes difference between the vectors of pore representations, is introduced to measure the similarity between the pores of the query and template fingerprint images. Extensive experimentation is carried out to demonstrate the effectiveness of the proposed scheme in terms of performance and complexity, and its superiority over the existing state-of-the-art pore-based fingerprint recognition systems.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"368-383"},"PeriodicalIF":0.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy-Preserving Face Recognition and Verification With Lensless Camera","authors":"Chris Henry;M. Salman Asif;Zhu Li","doi":"10.1109/TBIOM.2024.3515144","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3515144","url":null,"abstract":"Facial recognition technology is becoming increasingly ubiquitous nowadays. Facial recognition systems rely upon large amounts of facial image data. This raises serious privacy concerns since storing this facial data securely is challenging given the constant risk of data breaches or hacking. This paper proposes a privacy-preserving face recognition and verification system that works without compromising the user’s privacy. It utilizes sensor measurements captured by a lensless camera - FlatCam. These sensor measurements are visually unintelligible, preserving the user’s privacy. Our solution works without the knowledge of the camera sensor’s Point Spread Function and does not require image reconstruction at any stage. In order to perform face recognition without information on face images, we propose a Discrete Cosine Transform (DCT) domain sensor measurement learning scheme that can recognize faces without revealing face images. We compute a frequency domain representation by computing the DCT of the sensor measurement at multiple resolutions and then splitting the result into multiple subbands. The network trained using this DCT representation results in huge accuracy gains compared to the accuracy obtained after directly training with sensor measurement. In addition, we further enhance the security of the system by introducing pseudo-random noise at random DCT coefficient locations as a secret key in the proposed DCT representation. It is virtually impossible to recover the face images from the DCT representation without the knowledge of the camera parameters and the noise locations. We evaluated the proposed system on a real lensless camera dataset - the FlatCam Face dataset. Experimental results demonstrate the system is highly secure and can achieve a recognition accuracy of 93.97% while maintaining strong user privacy.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"354-367"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2024 Index IEEE Transactions on Biometrics, Behavior, and Identity Science Vol. 6","authors":"","doi":"10.1109/TBIOM.2024.3511314","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3511314","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"613-623"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10779200","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142777625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Age-Invariant Fingerprint Segmentation System","authors":"M. G. Sarwar Murshed;Keivan Bahmani;Stephanie Schuckers;Faraz Hussain","doi":"10.1109/TBIOM.2024.3506926","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3506926","url":null,"abstract":"Fingerprints are an important biometric modality used in various applications, including border crossings, healthcare systems, criminal justice, electronic voting, and more. Fingerprint-based identification systems attain higher accuracy when utilizing a slap fingerprint image containing multiple fingerprints of a subject, as opposed to using a single fingerprint. However, segmenting or auto-localizing the fingerprints in a slap image is a challenging task due to factors such as the different orientations of fingerprints, noisy backgrounds, and the smaller size of fingertip components. Real-world slap image datasets often contain rotated fingerprints, making it challenging for biometric recognition systems to automatically localize and label them accurately. Errors in fingerprint localization and finger labeling lead to poor matching performance. In this paper, we introduce a deep learning-based method for generating arbitrarily angled bounding boxes to precisely localize and label fingerprints in both axis-aligned and over-rotated slap images. We present CRFSEG (Clarkson Rotated Fingerprint Segmentation Model), an improvement upon the Faster R-CNN algorithm, incorporating arbitrarily-angled bounding boxes for enhanced performance on challenging slap images. CRFSEG demonstrates consistent results across different age groups and effectively handles over-rotated slap images. We evaluated CRFSEG against the widely used slap segmentation systems NFSEG and VeriFinger. Additionally, we leveraged a transformer-based vision architecture to build TransSEG (Transformer-based Slap Segmentation System), a new model for further comparison of CRFSEG with state-of-the-art deep learning-based image segmentation models. In our dataset containing both normal and rotated images of adult and children subjects, CRFSEG achieved a matching accuracy of 97.17%, which outperformed TransSEG (94.96%), VeriFinger (94.25%) and NFSEG segmentation systems (80.58%). The results indicate that our novel deep learning-based slap segmentation system is more efficient for both children and adult slaps. The code for building the CRFSEG and TransSEG model is publicly available at (<uri>https://github.com/sarwarmurshed/CRFSEG</uri>).","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"313-330"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science","authors":"","doi":"10.1109/TBIOM.2024.3459103","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3459103","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Greetings From the New Editor-in-Chief","authors":"Mark S. Nixon","doi":"10.1109/TBIOM.2024.3458600","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3458600","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"435-436"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767128","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2024.3459104","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3459104","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767129","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VerifNet - A Novel Score Fusion-Based Method Leveraging Wavelets With Deep Learning and Minutiae Matching for Contactless Fingerprint Recognition","authors":"Guruprasad Parasnis;Rajas Bhope;Anmol Chokshi;Vansh Jain;Archishman Biswas;Deekshant Kumar;Saket Pateriya;Vijay Anand;Vivek Kanhangad;Vikram M. Gadre","doi":"10.1109/TBIOM.2024.3504281","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3504281","url":null,"abstract":"This paper introduces a novel approach to a complete contactless biometric system that takes a finger photo image as an input and performs various image processing techniques and authenticates the fingerprints for an easy, non-invasive system that is efficient and robust. While contact-based fingerprint recognition systems have produced ground-breaking achievements, these systems face issues with latent fingerprints, sensor degradation brought on by frequent physical touch, and hygiene issues. Thus, the next step towards solving these issues is developing a contactless system that counters all the mentioned issues as faced by a contact-based fingerprint recognition system. This paper introduces a novel deep learning architecture that fuses the scattering wavelet transform making it lightweight and computationally efficient. A unique combination of the Siamese network integrated with wavelets and the traditional minutiae-based approach builds the core framework for the recognition system. The combination of these techniques allows the system to perform fingerprint recognition with high accuracy. This approach performs with an Equal Error Rate (EER) of 2.5% on the IITI-CFD, 2.5% on the PolyU 2D Contactless Dataset, and 3.76% on the IITB Touchless Fingerprint Dataset. Through, this paper, we aim to develop a biometric system that achieves a balance between economy and efficiency.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"344-353"},"PeriodicalIF":0.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}