{"title":"Fingerprint Pore Detection: A Survey","authors":"Azim Ibragimov;Mauricio Pamplona Segundo","doi":"10.1109/TBIOM.2025.3560655","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3560655","url":null,"abstract":"Fingerprint recognition research based on Level 3 features – especially sweat pores – has got increasing interest thanks to its ability to operate under daunting conditions, such as matching latent and partial prints. In this work, we review methods, datasets, and training and evaluation protocols for pore detection intended for obtaining such features. We have observed many inconsistencies in training and evaluation protocols, data withholding, and lack of public source code have hampered reproducibility and comparisons in the literature. We aim to address these challenges by looking into the most promising insights from existing works to establish best practices and introduce a more reasonable starting point for future research. To do so, we create a baseline pore detector and reimplement three others for comparison purposes. We carried out our experiments using the most popular dataset – PolyU-HRF – and two recent publicly available datasets – L3-SF and IITI-HRF. Our results show a reproducible path for researchers and highlight that there is still a wide margin for innovation and improvement in this area. An open repository containing the source code for our self-implemented detectors and the protocols employed in our experimental evaluation is available in: <uri>https://github.com/azimIbragimov/Fingerprint-Pore-Detection-A-Survey</uri>","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 4","pages":"848-861"},"PeriodicalIF":5.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145134934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2025.3548256","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3548256","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 2","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938747","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2025.3548257","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3548257","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 2","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938740","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rethinking Affect Analysis: A Protocol for Ensuring Fairness and Consistency","authors":"Guanyu Hu;Dimitrios Kollias;Eleni Papadopoulou;Paraskevi Tzouveli;Jie Wei;Xinyu Yang","doi":"10.1109/TBIOM.2025.3550000","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3550000","url":null,"abstract":"Evaluating affect analysis methods presents challenges due to inconsistencies in database partitioning and evaluation protocols, leading to unfair and biased results. Previous studies claim continuous performance improvements, but our findings challenge such assertions. Using these insights, we propose a unified protocol for database partitioning that ensures fairness and comparability. Specifically, our contributions include extending detailed demographic annotations (in terms of race, gender, and age) for six commonly used affective databases, providing fairness evaluation metrics, and establishing a common framework for expression recognition, action unit detection, and valence-arousal estimation. Additionally, we conduct extensive experiments using state-of-the-art and baseline methods under the new protocol, revealing previously unobserved fairness discrepancies and biases. We also rerun the methods with the new protocol and introduce new leaderboards to encourage future research in affect recognition with fairer comparisons. Our annotations, codes and pre-trained models are available here.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 4","pages":"914-923"},"PeriodicalIF":5.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145134919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Near-Duplicate Face Images","authors":"Sudipta Banerjee;Arun Ross","doi":"10.1109/TBIOM.2025.3548541","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3548541","url":null,"abstract":"Near-duplicate images are often generated when applying repeated photometric and geometric transformations that produce imperceptible variants of the original image. Consequently, a deluge of near-duplicates can be circulated online posing copyright infringement concerns. The concerns are more severe when biometric data is altered through such nuanced transformations. In this work, we address the challenge of near-duplicate detection in face images by, firstly, identifying the original image from a set of near-duplicates and, secondly, deducing the relationship between the original image and the near-duplicates. We construct a tree-like structure, called an Image Phylogeny Tree (IPT) using a graph-theoretic approach to estimate the relationship, i.e., determine the sequence in which they have been generated. We further extend our method to create an ensemble of IPTs known as Image Phylogeny Forests (IPFs). We rigorously evaluate our method to demonstrate robustness across other modalities, unseen transformations by latest generative models and IPT configurations, thereby significantly advancing the state-of-the-art performance by ~42% on IPF reconstruction accuracy. Our code is publicly available at <uri>https://github.com/sudban3089/DetectingNear-Duplicates</uri>.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"498-511"},"PeriodicalIF":0.0,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fingerprint Spoof Generation Using Style Transfer","authors":"Abdarahmane Wone;Joël Di Manno;Christophe Charrier;Christophe Rosenberger","doi":"10.1109/TBIOM.2025.3545308","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3545308","url":null,"abstract":"Nowadays, biometrics is becoming more and more present in our everyday lives. They are used in ID documents, border controls, authentication, and e-payment, etc. Therefore, ensuring the security of biometric systems has become a major concern. The certification process aims at qualifying the behavior of a biometric system and verifying its conformity to international specifications. It involves the evaluation of the system performance and its robustness to attacks. Anti-spoofing tests require the creation of physical presentation attack instruments (PAIs), which are used to evaluate the robustness of biometric systems against spoofing through multiple attempts of testing on the device. In this article, we propose a new solution based on deep learning to generate synthetic fingerprint spoof images from a small dataset of real-life images acquired by a specific sensor. We artificially modify these images to simulate how they would appear if generated from known spoof materials usually involved in fingerprint spoofing tests. Experiments on LivDet datasets show first, that synthetic fingerprint spoof images give similar performance to real-life ones from a matching point of view only and second, that injection attacks succeed 50% of the time for most of the materials we tested.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"512-523"},"PeriodicalIF":0.0,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PulseFormer: Continuous Remote Heart Rate Measurement Through Zoomed Time-Spectral Attention","authors":"Joaquim Comas;Adrià Ruiz;Federico Sukno","doi":"10.1109/TBIOM.2025.3544647","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3544647","url":null,"abstract":"Despite the recent advances in remote heart rate measurement, most improvements primarily focus on recovering the rPPG signal, often overlooking the inherent challenges of estimating heart rate (HR) from the derived signal. Furthermore, most existing methods adopt the average HR per video to assess model performance, thus relying on rather large temporal windows to produce a single estimate; this hampers their applicability to scenarios in which the continuous monitoring of a patient’s physiological status is crucial. Besides, this evaluation approach can also lead to biased performance assessments due to low continuous precision, as it considers only the mean value of the entire video. In this paper, we present the PulseFormer, a novel continuous deep estimator for remote HR. Our proposed method utilizes a time-frequency attention block that leverages the enhanced resolution properties of the Chirp-Z Transform (CZT) to accurately estimate HR from the recovered low-resolution signal using a reduced temporal window size. We validate the effectiveness of our model on the large-scale Vision-for-Vitals (V4V) benchmark, designed for continuous physiological signals estimation from facial videos. The results reveal outstanding frame-to-frame HR estimation capabilities, establishing the proposed approach as a robust and versatile estimator that could be used with any rPPG method.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 4","pages":"876-889"},"PeriodicalIF":5.0,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10899865","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145134935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What’s Color Got to Do With It? Face Recognition in Grayscale","authors":"Aman Bhatta;Domingo Mery;Haiyu Wu;Joyce Annan;Michael C. King;Kevin W. Bowyer","doi":"10.1109/TBIOM.2025.3542316","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3542316","url":null,"abstract":"State-of-the-art deep CNN face matchers are typically created using extensive training sets of color face images. Our study reveals that such matchers attain virtually identical accuracy when trained on either grayscale or color versions of the training set, even when the evaluation is done using color test images. Furthermore, we demonstrate that shallower models, lacking the capacity to model complex representations, rely more heavily on low-level features such as those associated with color. As a result, they display diminished accuracy when trained with grayscale images. We then consider possible causes for deeper CNN face matchers “not seeing color”. Popular Web-scraped face datasets actually have 30 to 60% of their identities with one or more grayscale images. We analyze whether this grayscale element in the training set impacts the accuracy achieved, and conclude that it does not. We demonstrate that using only grayscale images for both training and testing achieves accuracy comparable to that achieved using only color images for deeper models. This holds true for both real and synthetic training datasets. HSV color space, which separates chroma and luma information, does not improve the network’s learning about color any more than in the RGB color space. We then show that the skin region of an individual’s images in a Web-scraped training set exhibits significant variation in their mapping to color space. This suggests that color carries limited identity-specific information. We also show that when the first convolution layer is restricted to a single filter, models learn a grayscale conversion filter and pass a grayscale version of the input color image to the next layer. Finally, we demonstrate that leveraging the lower per-image storage for grayscale to increase the number of images in the training set can improve accuracy of face recognition.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"484-497"},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WePerson: Generalizable Re-Identification From Synthetic Data With Single Query Adaptation","authors":"He Li;Mang Ye;Kehua Su;Bo Du","doi":"10.1109/TBIOM.2025.3540919","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3540919","url":null,"abstract":"Person re-identification (ReID) aims to retrieve a target person across non-overlapping cameras. Due to the uncontrollable environment and the privacy concerns, the diversity and scale of real-world training data are usually limited, resulting in poor testing generalizability. To overcome these problems, we introduce a large-scale Weather Person dataset that generates synthetic images with different weather conditions, complex scenes, natural lighting changes, and various pedestrian accessories in a simulated camera network. The environment is fully controllable, supporting factor-by-factor analysis. To narrow the gap between synthetic data and real-world scenarios, this paper introduces a simple yet efficient domain generalization method via Single Query Adaptation (SQA), calibrating the statistics and transformation parameters in BatchNorm layers with only a single query image in the target domain. This significantly improves performance through a single adaptation epoch, greatly boosting the applicability of the ReID technique for intelligent surveillance systems. Abundant experiment results demonstrate that the WePerson dataset achieves superior performance under direct transfer setting without any real-world data training. In addition, the proposed SQA method shows amazing robustness in real-to-real, synthetic-to-real ReID, and various corruption settings. Dataset and code are available at <uri>https://github.com/lihe404/WePerson</uri>.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"458-470"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthetic Face Ageing: Evaluation, Analysis and Facilitation of Age-Robust Facial Recognition Algorithms","authors":"Wang Yao;Muhammad Ali Farooq;Joseph Lemley;Peter Corcoran","doi":"10.1109/TBIOM.2025.3536622","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3536622","url":null,"abstract":"Establishing the identity of an individual from their facial data is widely adopted across the consumer sector, driven by the use of facial authentication on handheld devices. This widespread use of facial authentication technology has raised other issues, in particular those of biases in the underlying algorithms. Initial studies focused on ethnic or gender biases, but another area is that of age-related biases. This research work focuses on the challenge of face recognition over decades-long time intervals and explores the feasibility of utilizing synthetic ageing data to improve the robustness of face recognition models in recognizing people across these longer time intervals. To achieve this, we first design a set of experiments to evaluate state-of-the-art synthetic ageing methods. In the next stage, we explore the effect of age intervals on a reference face recognition algorithm using both synthetic and real ageing data to perform rigorous validation. We then use these synthetic age data as an augmentation method to facilitate the age-invariant face recognition algorithm. Extensive experimental results demonstrate a notable improvement in the recognition rate of the model trained on synthetic ageing images, with an increase of 3.33% compared to the baseline model when tested on images with a 40-year age gap. Additionally, our models exhibit competitive performance when validated on benchmark cross-age datasets and general face recognition datasets. These findings underscore the potential of synthetic age data to enhance the performance of age-invariant face recognition systems.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"471-483"},"PeriodicalIF":0.0,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10858190","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}