{"title":"Context-Aware Person Re-Identification With Guided Prompt Pooling","authors":"Nirmala Murali;Deepak Mishra","doi":"10.1109/TBIOM.2026.3654624","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3654624","url":null,"abstract":"Text-based person re-identification (ReID) has progressed significantly with vision-language models such as CLIP. However, most datasets lack textual descriptions, and manual annotation is expensive, so many recent models use learnable prompts as substitutes for human annotations. Prompt-based solutions, however, often share identical text prompts across all classes and treat them independently, resulting in poor semantic adaptability to diverse visual features. To address this, we present CapReID, a context-aware prompt pooling approach guided by a distance function grounded in optimal transport theory. Specifically, we design a two-stage training process: the first stage performs prompt pooling, selecting only the prompts that align well with the image context via a Wasserstein distance-based weighted pooling technique. In the second stage, we propose a negative-prompting-based triplet loss to enhance image-prompt alignment. Extensive experiments on Market-1501, MSMT17, and DukeMTMC show that CapReID achieves 94.7% Rank-1 accuracy, highlighting its superior discriminability and semantic grounding compared to prior CLIP-based baselines. 
Code is available at <uri>https://github.com/Nirmala891/Context-aware-prompt-based-person-re-identification</uri>","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 3","pages":"392-397"},"PeriodicalIF":5.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147685559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
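The Wasserstein-distance-weighted pooling described in the abstract above can be sketched roughly as follows. This is a hypothetical illustration only, not the authors' implementation: the function names (`wasserstein_1d`, `pool_prompts`), the 1-D formulation of the distance, and the softmax weighting are all assumptions for exposition.

```python
import math

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-length 1-D samples:
    the mean absolute difference of their sorted values."""
    sa, sb = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(sa, sb)) / len(sa)

def pool_prompts(image_feat, prompt_feats, temperature=1.0):
    """Weight each candidate prompt by a softmax over its negative
    Wasserstein distance to the image feature (closer prompts get
    larger weights), then return the weighted-average prompt."""
    dists = [wasserstein_1d(image_feat, p) for p in prompt_feats]
    logits = [-d / temperature for d in dists]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(image_feat)
    pooled = [sum(w * p[i] for w, p in zip(weights, prompt_feats))
              for i in range(dim)]
    return pooled, weights
```

A prompt identical to the image context receives the largest weight, so the pooled prompt is pulled toward prompts that match the image, which is the intended selection behavior.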
{"title":"Speckle Noise-Based Slice Generation for OCT Fingerprint Analysis","authors":"Yi-Peng Liu;Jiajin Qi;Jing Li;Junhao Qu;Haixia Wang","doi":"10.1109/TBIOM.2026.3665642","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3665642","url":null,"abstract":"Optical coherence tomography (OCT) is renowned for its high resolution and ability to capture the 3D structure of fingertip skin, significantly enhancing the anticounterfeiting capabilities of fingerprint recognition systems. However, the scarcity of OCT fingerprint datasets, exacerbated by data collection challenges and privacy concerns, poses a major hurdle for practical implementation. We propose a novel conditional diffusion model that generates highly realistic OCT fingerprints from segmentation masks, marking the first attempt to synthesize such images. By modifying the noise model in the diffusion process to account for speckle noise, our method achieves accurate noise simulation and effective removal, resulting in clearer generation of fine details. Subjective evaluations and multiple objective metrics confirm the superior visual quality and diversity of the generated images. When these images are incorporated into training datasets for presentation attack detection (PAD) and fingerprint layer segmentation tasks, they exhibit pixel distributions highly consistent with bona fide fingerprints and capture detailed skin structures through segmentation mask guidance. 
These results highlight the potential of our approach to enhance the performance of OCT fingerprint recognition in practical applications.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 3","pages":"327-339"},"PeriodicalIF":5.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147685502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
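The speckle noise the OCT abstract refers to is classically modeled as multiplicative noise. The sketch below shows that textbook formulation only; it is an assumption for illustration and does not reproduce the paper's modified diffusion noise model (`add_speckle` and its parameters are hypothetical names).

```python
import random

def add_speckle(image, sigma=0.2, seed=0):
    """Classical multiplicative speckle model: each pixel x becomes
    x * (1 + n) with n ~ N(0, sigma^2), so noise magnitude scales
    with local intensity (dark regions stay dark)."""
    rng = random.Random(seed)
    return [[px * (1.0 + rng.gauss(0.0, sigma)) for px in row]
            for row in image]
```

Because the noise is multiplicative, zero-intensity pixels are left untouched, which distinguishes speckle from the additive Gaussian noise used in standard diffusion models.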
{"title":"DetReIDX: A Stress-Test Dataset for Real-World UAV-Based Person Recognition","authors":"Kailash A. Hambarde;Nzakiese Mbongo;M. P. Pavan Kumar;Satishkumar Raghavrao Mekewad;Carolina Fernandes;Gökhan Silahtaroğlu;A. Alice Nithya;Pawan Wasnik;Md Rashidunnabi;Pranita P. Samale;Hugo Proença","doi":"10.1109/TBIOM.2025.3650628","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3650628","url":null,"abstract":"Person re-identification (ReID) technology is considered to perform relatively well under controlled, ground-level conditions, but to break down when deployed in challenging real-world settings. This is due to extreme data variability factors such as resolution, viewpoint changes, scale variations, occlusions, and appearance shifts from clothing or session drift. Moreover, publicly available datasets do not realistically incorporate such kinds and magnitudes of variability, which limits the progress of this technology. This paper introduces DetReIDX, a large-scale aerial-ground person dataset explicitly designed as a stress test for ReID under real-world conditions. DetReIDX is a multi-session set that includes over 18 million bounding boxes from 553 identities, collected across seven university campuses on three continents, with drone altitudes between 5.8 and 120 meters. As a key novelty, DetReIDX subjects were recorded in at least two sessions on different days, with changes in clothing, daylight, and location, making it suitable for actually evaluating long-term person ReID. Further, the data were annotated with 16 soft biometric attributes and multitask labels for detection, tracking, ReID, and action recognition. To provide empirical evidence of DetReIDX's usefulness, we considered the specific tasks of human detection, ReID, and tracking, and observed that state-of-the-art methods degrade catastrophically (by up to 80% in detection accuracy and over 70% in Rank-1 ReID) when exposed to DetReIDX’s conditions. 
The dataset, annotations, and official evaluation protocols are publicly available at <uri>https://www.it.ubi.pt/DetReIDX/</uri>","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 3","pages":"365-377"},"PeriodicalIF":5.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11328787","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147685553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2026.3662542","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3662542","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 2","pages":"C2-C2"},"PeriodicalIF":5.0,"publicationDate":"2026-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11400673","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146216652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial for the TBIOM Special Issue on Generative AI and Large Vision-Language Models for Biometrics","authors":"Fadi Boutros;Hu Han;Tempestt Neal;Vishal M. Patel;Vitomir Štruc;Yunhong Wang","doi":"10.1109/TBIOM.2026.3661613","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3661613","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 2","pages":"152-153"},"PeriodicalIF":5.0,"publicationDate":"2026-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11400674","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146216640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2026.3662543","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3662543","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 2","pages":"C3-C3"},"PeriodicalIF":5.0,"publicationDate":"2026-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11400647","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146216657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GaitDFG: A Deformation Field-Guided Feature Learning Framework for Gait Recognition","authors":"Wei Huo;Ke Wang;Jun Tang;Yan Zhang;Feng Chen","doi":"10.1109/TBIOM.2026.3659149","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3659149","url":null,"abstract":"Gait recognition is a promising biometric technique that uses walking patterns for authentication. Motion representation remains a long-standing challenge for this task. To address it, most recent methods have conducted intensive studies on multi-scale temporal modeling and fine-grained spatial information aggregation, which generally characterize motion information only implicitly. How to quantitatively represent the change process of human body contours and dynamic motion differences remains an open problem. In this paper, we propose a novel motion representation for gait recognition derived from the deformation fields produced by classical non-rigid point-set registration. These deformation fields are seamlessly integrated into the proposed gait recognition framework, GaitDFG, to yield discriminative motion features. GaitDFG consists of three key components: a Silhouette Feature extraction Network (SFNet), a Deformation field Feature extraction Network (DFNet), and a Knowledge Distillation Module (KDM). SFNet captures dynamic appearance differences and aggregates contextual information between neighboring frames of the input silhouette sequence. Furthermore, a multi-scale spatial perception module in DFNet extracts motion features from the deformation fields to uncover additional motion clues. Moreover, since real-time computation of deformation fields is infeasible in real-world scenarios, we design a deformation field feature simulation module, learned from DFNet via knowledge distillation, that mimics the features of deformation fields at inference. 
Consequently, in the inference stage, we can fuse silhouette features and simulated deformation field features to perform gait recognition. Extensive experiments are conducted to validate the effectiveness of GaitDFG, demonstrating state-of-the-art performance on the standard gait recognition benchmarks, including CASIA-B (in-the-lab), GREW (in-the-wild) and CCPG (cloth-changing).","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 2","pages":"285-294"},"PeriodicalIF":5.0,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146216632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
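The distillation-and-fusion scheme the GaitDFG abstract describes (a simulation module trained to mimic DFNet's deformation-field features, then fused with silhouette features at inference) can be illustrated with a generic feature-mimicking objective. The names `distill_mse` and `fuse` are hypothetical, and the paper's actual loss and fusion operator are not specified in the abstract.

```python
def distill_mse(student_feats, teacher_feats):
    """Feature-mimicking distillation objective: mean squared error
    between the student's simulated features and the teacher's
    deformation-field features. Minimizing this trains the student
    to stand in for the (expensive) teacher at inference time."""
    n = len(student_feats)
    return sum((s - t) ** 2
               for s, t in zip(student_feats, teacher_feats)) / n

def fuse(silhouette_feats, simulated_feats):
    """Simple concatenation fusion of the two feature streams,
    as one plausible way to combine them for recognition."""
    return list(silhouette_feats) + list(simulated_feats)
```

When the student matches the teacher exactly the loss is zero, so at convergence the fused descriptor approximates what silhouette plus true deformation-field features would give.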
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2026.3652264","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3652264","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 1","pages":"C3-C3"},"PeriodicalIF":5.0,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146045359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2026.3652243","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3652243","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 1","pages":"C2-C2"},"PeriodicalIF":5.0,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364035","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146045356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affinity Contrastive Learning for Skeleton-Based Human Activity Understanding","authors":"Hongda Liu;Yunfan Liu;Min Ren;Lin Sui;Yunlong Wang;Zhenan Sun","doi":"10.1109/TBIOM.2026.3652637","DOIUrl":"https://doi.org/10.1109/TBIOM.2026.3652637","url":null,"abstract":"In skeleton-based human activity understanding, existing methods often adopt the contrastive learning paradigm to construct a discriminative feature space. However, many of these approaches fail to exploit the structural inter-class similarities and overlook the impact of anomalous positive samples. In this study, we introduce ACLNet, an Affinity Contrastive Learning Network that explores the intricate clustering relationships among human activity classes to improve feature discrimination. Specifically, we propose an affinity metric to refine similarity measurements, thereby forming activity superclasses that provide more informative contrastive signals. A dynamic temperature schedule is also introduced to adaptively adjust the penalty strength for various superclasses. In addition, we employ a margin-based contrastive strategy to improve the separation of hard positive and negative samples within classes. Extensive experiments on NTU RGB+D 60, NTU RGB+D 120, Kinetics-Skeleton, PKU-MMD, FineGYM, and CASIA-B demonstrate the superiority of our method in skeleton-based action recognition, gait recognition, and person re-identification. 
The source code is available at <uri>https://github.com/firework8/ACLNet</uri>","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"8 2","pages":"244-254"},"PeriodicalIF":5.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146216622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
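Both the CapReID and ACLNet abstracts above invoke margin-based triplet or contrastive objectives. The sketch below is the textbook triplet-margin formulation, not either paper's exact loss; `euclidean` and `triplet_margin_loss` are generic names assumed for illustration.

```python
def euclidean(a, b):
    """Euclidean (L2) distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_margin_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss with margin m:
    max(0, d(anchor, positive) - d(anchor, negative) + m).
    The loss is zero once the negative is at least `margin`
    farther from the anchor than the positive is."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)
```

The margin is what forces hard positives and negatives apart: a triplet that merely satisfies d(a,p) < d(a,n) still incurs loss until the gap exceeds the margin.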