IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Publications

IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-09-25. DOI: 10.1109/TBIOM.2025.3607046
Vol. 7, No. 4, pp. C3-C3
Citations: 0
Distillation-Guided Representation Learning for Unconstrained Video Human Authentication
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-08-04. DOI: 10.1109/TBIOM.2025.3595366
Yuxiang Guo; Siyuan Huang; Ram Prabhakar Kathirvel; Chun Pong Lau; Rama Chellappa; Cheng Peng
Abstract: Human authentication is an important and challenging biometric task, particularly from unconstrained videos. While body recognition is a popular approach, gait recognition holds the promise of robustly identifying subjects based on walking patterns rather than appearance information. Previous gait-based approaches have performed well on curated indoor scenes; however, they tend to underperform in unconstrained situations. To address these challenges, we propose a framework, termed Holistic GAit DEtection and Recognition (H-GADER), for human authentication in challenging outdoor scenarios. Specifically, H-GADER leverages a Double Helical Signature to detect segments that contain human movement and builds discriminative features through a novel gait recognition method. To further enhance robustness, H-GADER encodes viewpoint information in its architecture and distills learned representations from an auxiliary RGB recognition model; this allows H-GADER to learn from the maximum amount of data at training time. At test time, H-GADER infers solely from the silhouette modality. Furthermore, we introduce a body recognition model trained through semantic, large-scale, self-supervised learning to complement gait recognition. By conditionally fusing gait and body representations based on the presence or absence of gait information, as decided by the gait detection, we demonstrate significant improvements over using a single modality or a naive feature ensemble. We evaluate our method against multiple existing state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets, especially on the BRIAR dataset, which features unconstrained, long-distance videos, achieving a 28.9% improvement.
Vol. 7, No. 4, pp. 940-952
Citations: 0
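The conditional fusion H-GADER performs between gait and body representations can be pictured as a gating operation driven by the gait detector. A minimal PyTorch sketch follows; the embedding size, module names, and fusion form are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ConditionalFusion(nn.Module):
    """Fuse gait and body embeddings, falling back to the body embedding
    alone when the gait detector reports no usable walking segment."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Projects the concatenated embeddings back to one identity vector.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, body_emb, gait_emb, gait_present):
        # gait_present: (batch,) boolean mask from the gait detector.
        fused = self.fuse(torch.cat([body_emb, gait_emb], dim=-1))
        # Where no gait was detected, keep the body embedding unchanged.
        return torch.where(gait_present.unsqueeze(-1), fused, body_emb)

# Example: batch of 4 clips, 512-d embeddings, gait found in clips 0 and 2.
body = torch.randn(4, 512)
gait = torch.randn(4, 512)
present = torch.tensor([True, False, True, False])
out = ConditionalFusion()(body, gait, present)  # (4, 512)
```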
Vein Pattern-Based Partial Finger Vein Alignment and Recognition
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-07-24. DOI: 10.1109/TBIOM.2025.3592306
Enyan Li; Lu Yang; Kuikui Wang; Yongxin Wang; Yilong Yin
Abstract: Partial finger vein recognition is a challenging but important task in scenarios where the sensors used for user enrollment and recognition differ due to sensor upgrades, and there is a significant disparity between the imaging areas of their respective imaging windows. Although state-of-the-art recognition methods achieve promising performance on full finger vein images, they may suffer degradation on partial finger vein images. To deal with the problem of recognizing a patch of a finger vein image, this paper proposes a vein pattern-based partial finger vein alignment and recognition method. The method employs direction variation points, in conjunction with vein bifurcation points and endpoints, as minutiae of the finger vein pattern to align full and partial images. The process involves a two-stage alignment mechanism: rough alignment constrained by the finger's physical structure, and precise alignment determined by joint texture and location features. The candidate matching region(s) can be identified within the full gallery image corresponding to the partial probe image, and further used in subsequent minutiae- and vein pattern-based recognition. Gallery images that fail to exhibit minutiae matches are classified as imposters in verification mode, or receive matching scores of zero in identification mode. Extensive experimental results on three finger vein databases demonstrate the advantage of the proposed method in partial finger vein recognition, achieving accuracies of 97.54% on HKPU-FV, 97.22% on PLUS-LED and 97.22% on PLUS-LAS.
Vol. 7, No. 4, pp. 837-847
Citations: 0
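The coarse-to-fine alignment idea can be illustrated with a toy translation search over minutiae point sets. This sketch is illustrative only: the paper additionally constrains the rough stage with the finger's physical structure and refines with joint texture and location features, none of which are modelled here.

```python
import numpy as np

def nn_score(shifted, full_pts):
    """Mean distance from each shifted partial minutia to its nearest
    full-image minutia; lower means a better alignment."""
    d = np.linalg.norm(shifted[:, None] - full_pts[None], axis=-1)
    return d.min(axis=1).mean()

def align_partial(full_pts, part_pts, search=64, coarse=8):
    """Toy two-stage alignment of a partial probe's minutiae against a
    full gallery image: a rough translation scan (stage 1) followed by a
    fine search around the best coarse cell (stage 2)."""
    def best_offset(offsets):
        scored = [(nn_score(part_pts + np.array(o), full_pts), o) for o in offsets]
        return min(scored)

    grid = [(dx, dy) for dx in range(0, search, coarse)
                     for dy in range(0, search, coarse)]
    _, (bx, by) = best_offset(grid)                      # stage 1: rough
    fine = [(bx + dx, by + dy) for dx in range(-coarse, coarse + 1)
                               for dy in range(-coarse, coarse + 1)]
    score, offset = best_offset(fine)                    # stage 2: precise
    # Gallery entries with no acceptable minutiae match would be rejected
    # as imposters (verification) or scored zero (identification).
    return offset, score

rng = np.random.default_rng(0)
full = rng.random((60, 2)) * 200
partial = full[:15] - 12.0        # patch of the same finger, true offset (12, 12)
print(align_partial(full, partial))  # recovers roughly ((12, 12), ~0)
```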
Beyond Mortality: Advancements in Post-Mortem Iris Recognition Through Data Collection and Computer-Aided Forensic Examination
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-07-02. DOI: 10.1109/TBIOM.2025.3585093
Rasel Ahmed Bhuyian; Parisa Farmanifard; Renu Sharma; Andrey Kuehlkamp; Aidan Boyd; Patrick J. Flynn; Kevin W. Bowyer; Arun Ross; Dennis Chute; Adam Czajka
Abstract: Post-mortem iris recognition brings both hope to the forensic community (a short-term but accurate and fast means of verifying identity) and concerns to society (its potential illicit use in post-mortem impersonation). These hopes and concerns have grown along with the volume of research in post-mortem iris recognition. Barriers to further progress include the difficult nature of data collection and the resulting small number of approaches designed specifically for comparing iris images of deceased subjects. This paper makes several unique contributions to mitigate these barriers. First, we offer a new dataset of NIR (compliant with ISO/IEC 19794-6 where possible) and visible-light iris images collected after demise from 259 subjects, with the largest PMI (post-mortem interval) being 1,674 hours. For one subject, the data was collected both before and after death, the first such case ever published. Second, the collected dataset was combined with publicly available post-mortem samples to assess the current state of the art in automatic forensic iris recognition, using five iris recognition methods and data originating from 338 deceased subjects. These experiments include analyses of how selected demographic factors influence recognition performance. Third, this study implements a model for detecting post-mortem iris images, which can be considered presentation attacks. Finally, we offer an open-source forensic tool integrating three post-mortem iris recognition methods, with explainability elements added to make the comparison process more human-interpretable.
Vol. 7, No. 4, pp. 808-823
Citations: 0
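Framing post-mortem iris detection as binary presentation-attack detection reduces to fine-tuning a classifier on live versus post-mortem samples. The sketch below is a generic baseline under assumed choices (ResNet-18 backbone, 224x224 crops, BCE loss); the paper does not prescribe this particular model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical baseline: binary detector for post-mortem iris images,
# treated as a presentation-attack-detection problem.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # live (0) vs post-mortem (1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of iris crops."""
    optimizer.zero_grad()
    logits = backbone(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with dummy tensors standing in for 224x224 iris crops.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```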
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-06-26. DOI: 10.1109/TBIOM.2025.3577282
Vol. 7, No. 3, pp. C2-C2
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-06-26. DOI: 10.1109/TBIOM.2025.3577281
Vol. 7, No. 3, pp. C3-C3
Citations: 0
Handling the Details: A Two-Stage Diffusion Approach to Improving Hands in Human Image Generation
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-06-05. DOI: 10.1109/TBIOM.2025.3577085
Anton Pelykh; Ozge Mercanoglu Sincan; Richard Bowden
Abstract: There has been significant progress in human image generation in recent years, particularly with the introduction of diffusion models. However, existing methods struggle to produce consistent hand anatomy, and the generated images often lack precise control over hand pose. To address this limitation, we introduce a novel two-stage approach to pose-conditioned human image generation: we first generate detailed hands and then outpaint the body around those hands. We propose training the hand generator in a multi-task setting to produce both hand images and their corresponding segmentation masks, and employ the trained model in the first stage of generation. An adapted ControlNet model is then used in the second stage to outpaint the body. We introduce a novel blending technique that combines the results of both stages in a coherent way and preserves the hand details: it sequentially expands the outpainted region while fusing the latent representations, ensuring a seamless and cohesive synthesis of the final image. Experimental evaluations demonstrate the superiority of our proposed method over state-of-the-art techniques in both pose accuracy and image quality, as validated on the HaGRID and YouTube-ASL datasets. Our approach not only enhances the quality of the generated hands but also offers improved control over hand pose, advancing the capabilities of pose-conditioned human image generation. We make the code available.
Vol. 7, No. 4, pp. 890-901
Citations: 0
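The blending step, stripped to its essentials, is a masked fusion of two latents with a mask that grows outward over several steps. The sketch below is a toy version under assumed latent shapes and an assumed linear schedule; the paper's blending operates inside the diffusion sampling loop rather than as a one-shot post-process.

```python
import torch

def blend_latents(hand_lat, body_lat, hand_mask, steps=4, dilate=3):
    """Toy two-stage blending: keep the generated hand region and
    sequentially expand the fused region into the outpainted body latent,
    increasing the weight of the hand latent at each step so the seam is
    spread across the expansion schedule."""
    mask = hand_mask.clone()          # 1 inside the hand crop, 0 elsewhere
    blended = body_lat.clone()
    kernel = torch.ones(1, 1, 2 * dilate + 1, 2 * dilate + 1)
    for step in range(1, steps + 1):
        w = step / steps              # grow trust in the hand latent
        blended = torch.where(mask.bool(),
                              w * hand_lat + (1 - w) * blended,
                              blended)
        # Dilate the mask so the fused region expands outwards each step.
        mask = (torch.nn.functional.conv2d(mask, kernel, padding=dilate) > 0).float()
    return blended

hand = torch.randn(1, 4, 64, 64)      # latent from the hand generator
body = torch.randn(1, 4, 64, 64)      # latent from the outpainting stage
mask = torch.zeros(1, 1, 64, 64); mask[..., 20:44, 20:44] = 1.0
out = blend_latents(hand, body, mask)  # (1, 4, 64, 64)
```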
ClipSwap++: Improved Identity and Attributes Aware Face Swapping
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-06-03. DOI: 10.1109/TBIOM.2025.3576111
Phyo Thet Yee; Sudeepta Mishra; Abhinav Dhall
Abstract: This paper introduces an efficient framework for identity- and attribute-aware face swapping. Accurately preserving the source face's identity while maintaining the target face's attributes remains a challenge in face swapping due to mismatches between identity and attribute features. To address this, building on our previous work, ClipSwap, we propose an extended version, ClipSwap++, with improved efficiency in inference time and memory consumption, and more accurate preservation of identity and attributes. Our model is mainly composed of a conditional Generative Adversarial Network and a CLIP-based image encoder to generate realistic face-swapped images. We carefully design ClipSwap++ as a combination of the following three components. First, we introduce the Adaptive Identity Fusion Module (AIFM), which ensures accurate preservation of identity through the careful integration of the ArcFace-encoded identity with the CLIP-embedded identity. Second, we propose a new decoder architecture with multiple Multi-level Attributes Integration Modules (MAIM) to adaptively integrate identity and attribute features, enhancing the preservation of the source face's identity while maintaining the target image's important attributes. Third, to further enhance attribute preservation, we introduce a Multi-level Attributes Preservation Loss, which computes the distance between the intermediate and final output features of the target and swapped images. We perform quantitative and qualitative evaluations on three datasets; our model obtains the highest identity accuracy (98.93%) with a low pose error (1.62) on the FaceForensics++ dataset and a short inference time (0.30 s).
Vol. 7, No. 4, pp. 862-875
Citations: 0
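One way to picture an AIFM-style block is as learned per-channel gating between the two identity embeddings. The sketch below assumes 512-d ArcFace and CLIP embeddings and a sigmoid gate; these are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AdaptiveIdentityFusion(nn.Module):
    """Sketch of fusing an ArcFace identity embedding with a CLIP image
    embedding via learned per-channel mixing weights."""

    def __init__(self, arc_dim=512, clip_dim=512, out_dim=512):
        super().__init__()
        self.proj_arc = nn.Linear(arc_dim, out_dim)
        self.proj_clip = nn.Linear(clip_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.Sigmoid())

    def forward(self, arc_emb, clip_emb):
        a, c = self.proj_arc(arc_emb), self.proj_clip(clip_emb)
        g = self.gate(torch.cat([a, c], dim=-1))   # per-channel mixing weights
        return g * a + (1 - g) * c                 # fused identity vector

# Example: batch of 2 faces, one fused 512-d identity vector each.
fused = AdaptiveIdentityFusion()(torch.randn(2, 512), torch.randn(2, 512))
```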
OpenThermalPose2: Extending the Open-Source Annotated Thermal Human Pose Dataset With More Data, Subjects, and Poses
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-06-02. DOI: 10.1109/TBIOM.2025.3575499
Askat Kuzdeuov; Miras Zakaryanov; Alim Tleuliyev; Huseyin Atakan Varol
Abstract: Human pose estimation has many applications in action recognition, human-robot interaction, motion capture, augmented reality, sports analytics, and healthcare. Numerous datasets and deep learning models have been developed for human pose estimation in the visible domain; however, poor lighting conditions and privacy issues persist. These challenges can be addressed using thermal cameras, but there are only a limited number of annotated thermal human pose datasets for training deep learning models. Previously, we presented the OpenThermalPose dataset with 6,090 thermal images of 31 subjects and 14,315 annotated human instances. In this work, we extend OpenThermalPose with more thermal images, human instances, and poses. The extended dataset, OpenThermalPose2, contains 21,125 carefully annotated human instances within 11,391 thermal images of 170 subjects. To show the efficacy of OpenThermalPose2, we trained YOLOv8-pose and YOLO11-pose models on the dataset. The experimental results show that models trained on OpenThermalPose2 outperform the previous YOLOv8-pose models trained on OpenThermalPose. Additionally, we optimized the YOLO11-pose models trained on OpenThermalPose2 by converting their checkpoints from PyTorch to TensorRT format. We deployed the PyTorch and TensorRT models on an NVIDIA Jetson AGX Orin 64GB and measured their inference time and accuracy. The TensorRT models using half-precision floating point (FP16) achieved the best balance between speed and accuracy, making them suitable for real-time applications. We have made the dataset, source code, and pre-trained models publicly available at https://github.com/IS2AI/OpenThermalPose to bolster research in this field.
Vol. 7, No. 4, pp. 902-913
Citations: 0
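The train-then-export recipe described above maps directly onto the public Ultralytics API. In this sketch the dataset YAML name and hyperparameters are placeholders, not values reported by the paper:

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLO11 pose checkpoint on a thermal pose dataset.
model = YOLO("yolo11n-pose.pt")
model.train(data="openthermalpose2.yaml",  # hypothetical dataset config
            epochs=100, imgsz=640)

# Convert the PyTorch checkpoint to a TensorRT engine in FP16, the
# configuration the paper found best for real-time use on Jetson AGX Orin.
model.export(format="engine", half=True)
```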
CattleDiT: A Distillation-Driven Transformer for Cattle Identification
IF 5.0
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2025-04-29. DOI: 10.1109/TBIOM.2025.3565516
Niraj Kumar; Sanjay Kumar Singh
Abstract: Rising standards for biosecurity, disease prevention, and livestock tracing are driving the need for an efficient identification system within the livestock supply chain. Traditional methods for cattle identification are invasive and unreliable due to issues like fraud, theft, and duplication. While deep learning-based methods, particularly Vision Transformers (ViTs), have demonstrated superior accuracy compared to traditional Convolutional Neural Networks (CNNs), they require significantly larger datasets for training and have high computational demands. To address the challenges of large data requirements and to achieve faster convergence with fewer parameters, this paper proposes a novel distillation-based transformer approach for cattle identification. We extract the muzzle region from a publicly available front-face cattle image dataset covering 300 cattle and perform a distillation process to ensure that the student transformer model learns effectively from the teacher model through a proposed Adaptive Stochastic Depth mechanism. The teacher model, based on a lightweight custom convolutional network, extracts key features, which are then used to train the student Vision Transformer model, named CattleDiT. This approach reduces the data requirements and computational complexity of the ViT while maintaining high accuracy. The proposed model outperforms conventional ViT models and other state-of-the-art methods, achieving 99.81% accuracy on the training set and 96.67% on the test set. Additionally, several explainable AI methods are employed to enhance the interpretability of the prediction results.
Vol. 7, No. 4, pp. 824-836
Citations: 0
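For readers unfamiliar with CNN-to-ViT distillation, the generic objective is cross-entropy on ground truth plus a temperature-scaled KL term toward the teacher. The sketch below shows only this standard formulation; the paper's contribution additionally routes distillation through an Adaptive Stochastic Depth mechanism, which is not modelled here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard logit distillation: hard-label cross-entropy plus
    temperature-softened KL divergence to the teacher's predictions."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T  # T^2 rescales gradients
    return alpha * ce + (1 - alpha) * kd

# Example: CNN teacher guiding a ViT student over 300 cattle identities.
student = torch.randn(8, 300, requires_grad=True)
teacher = torch.randn(8, 300)
labels = torch.randint(0, 300, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```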