IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Publications

IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-04-03, DOI: 10.1109/TBIOM.2024.3378798
Volume 6, Issue 2, Pages C2-C2. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10490304
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-04-03, DOI: 10.1109/TBIOM.2024.3378799
Volume 6, Issue 2, Pages C3-C3. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10490307
Citations: 0
A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-04-03, DOI: 10.1109/TBIOM.2024.3384704
Authors: Shinan Zou; Jianbo Xiong; Chao Fan; Chuanfu Shen; Shiqi Yu; Jin Tang
Abstract: Gait recognition is a biometric technology that has received extensive attention. Most existing gait recognition algorithms are unimodal, and the few multimodal algorithms perform fusion only once, so none of them fully exploits the complementary advantages of the multiple modalities. In this paper, by considering the temporal and spatial characteristics of gait data, we propose a multi-stage feature fusion strategy (MSFFS), which performs multimodal fusion at different stages of the feature extraction process. We also propose an adaptive feature fusion module (AFFM) that considers the semantic association between silhouettes and skeletons, fusing different silhouette areas with their most related skeleton joints. Since visual appearance changes and time passage co-occur in a gait period, we propose a multiscale spatial-temporal feature extractor (MSSTFE) to learn spatial-temporal linkage features thoroughly; specifically, MSSTFE extracts and aggregates spatial-temporal linkage information at different spatial scales. Combining the strategy and modules mentioned above, we propose a multi-stage adaptive feature fusion (MSAFF) neural network, which shows state-of-the-art performance in many experiments on three datasets. In addition, MSAFF is equipped with feature dimensional pooling (FD Pooling), which can significantly reduce the dimension of the gait representations without hindering accuracy. The code is available at https://github.com/ShinanZou/MSAFF.
Volume 6, Issue 4, Pages 539-549.
Citations: 0
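The adaptive fusion described in the abstract above pairs silhouette regions with their most related skeleton joints. The sketch below is not the authors' code; it only illustrates how such a pairing can be realized with cross-attention, where silhouette-strip features query skeleton-joint features. The tensor shapes, strip and joint counts, and layer sizes are illustrative assumptions; the reference implementation is at https://github.com/ShinanZou/MSAFF.

# Minimal sketch of attention-based fusion between silhouette-region features
# and skeleton-joint features. Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        # Silhouette regions attend to skeleton joints.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, sil_regions, skel_joints):
        # sil_regions: (B, R, dim) features of R horizontal silhouette strips
        # skel_joints: (B, J, dim) features of J skeleton joints
        fused, weights = self.attn(query=sil_regions, key=skel_joints, value=skel_joints)
        # Residual connection keeps the original silhouette information intact.
        return self.norm(sil_regions + fused), weights

if __name__ == "__main__":
    fusion = CrossModalFusion()
    sil = torch.randn(2, 16, 128)   # 16 silhouette strips (assumed)
    skel = torch.randn(2, 17, 128)  # 17 joints, e.g., a COCO-style layout (assumed)
    out, w = fusion(sil, skel)
    print(out.shape, w.shape)       # torch.Size([2, 16, 128]) torch.Size([2, 16, 17])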
ExplaNET: A Descriptive Framework for Detecting Deepfakes With Interpretable Prototypes
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-30, DOI: 10.1109/TBIOM.2024.3407650
Authors: Fatima Khalid; Ali Javed; Khalid Mahmood Malik; Aun Irtaza
Abstract: The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional methods for detecting deepfakes rely on deep learning models that lack transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce a novel method called ExplaNET, which utilizes interpretable and explainable prototypes to detect deepfakes. By employing prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the decision-making process of our model, offering insights into the key features crucial for deepfake detection. Subsequently, we utilize these prototypes to train a classification model that achieves both accuracy and interpretability in deepfake detection. We also employ the Grad-CAM technique to generate heatmaps, highlighting the image regions contributing most significantly to the decision-making process. Through experiments conducted on datasets such as FaceForensics++, Celeb-DF, and DFDC-P, our method demonstrates superior performance compared to state-of-the-art techniques in deepfake detection. Furthermore, the interpretability and explainability intrinsic to our method enhance its trustworthiness among forensic experts, owing to the transparency of our model.
Volume 6, Issue 4, Pages 486-497.
Citations: 0
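The abstract above mentions Grad-CAM heatmaps that highlight the image regions driving the real/fake decision. The following is a minimal Grad-CAM sketch, not ExplaNET itself: a torchvision ResNet-18 with two output classes stands in for the actual detector, and the class index for "fake" is an assumption.

# Minimal Grad-CAM sketch for a hypothetical binary real/fake classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2).eval()   # placeholder for the deepfake detector
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["value"] = output

def bwd_hook(module, grad_in, grad_out):
    grads["value"] = grad_out[0]

# Hook the last convolutional block to capture activations and their gradients.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, target_class=1):
    """Return a coarse heatmap for target_class on a (1, 3, H, W) image."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    # Channel weights are the spatially averaged gradients (Grad-CAM).
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["value"]).sum(dim=1))
    return cam[0] / (cam.max() + 1e-8)

if __name__ == "__main__":
    heatmap = grad_cam(torch.randn(1, 3, 224, 224))
    print(heatmap.shape)   # e.g., torch.Size([7, 7]) before upsampling to image size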
SPADNet: Structure Prior-Aware Dynamic Network for Face Super-Resolution
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-28, DOI: 10.1109/TBIOM.2024.3382870
Authors: Chenyang Wang; Junjun Jiang; Kui Jiang; Xianming Liu
Abstract: The recent emergence of deep learning neural networks has propelled advancements in face super-resolution. While these deep learning-based methods have shown significant performance improvements, they depend overwhelmingly on fixed, spatially shared kernels within standard convolutional layers. This neglects the diversity of facial structures and regions, and such methods consequently struggle to reconstruct high-fidelity face images. Since the face is a highly structured object, its structural features are crucial for representing and reconstructing face images. To this end, we introduce a structure prior-aware dynamic network (SPADNet) that leverages facial structure priors to generate structure-aware dynamic kernels for the distinctive super-resolution of various face images. Because spatially shared kernels are not well suited to representing specific regions, a local structure-adaptive convolution (LSAC) is devised to characterize the local relations of facial features, which is more effective for precise texture representation. Meanwhile, a global structure-aware convolution (GSAC) is elaborated to capture the global facial contours and guarantee structural consistency. These strategies form a unified face reconstruction framework, which reconciles the distinct representation of diverse face images with individual structure fidelity. Extensive experiments confirm the superiority of our proposed SPADNet over state-of-the-art methods. The source code of the proposed method will be available at https://github.com/wcy-cs/SPADNet.
Volume 6, Issue 3, Pages 326-340.
Citations: 0
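The abstract above describes dynamic kernels generated from a facial structure prior. As a rough illustration of that idea only, and not the SPADNet design itself, the sketch below predicts per-sample depthwise kernels from a pooled structure prior (a parsing-map-like tensor) and applies them with a grouped convolution. The prior format, channel counts, and kernel size are assumptions; see https://github.com/wcy-cs/SPADNet for the reference code once released.

# Minimal sketch of a structure-conditioned dynamic (per-sample) convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    def __init__(self, channels=32, kernel_size=3, prior_channels=11):
        super().__init__()
        self.c, self.k = channels, kernel_size
        # Predict one depthwise kernel per channel from the pooled structure prior.
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(prior_channels, channels * kernel_size * kernel_size),
        )

    def forward(self, feat, prior):
        # feat: (B, C, H, W) image features; prior: (B, P, H, W) structure prior.
        b = feat.size(0)
        kernels = self.kernel_head(prior).view(b * self.c, 1, self.k, self.k)
        # Fold the batch into groups so each sample is filtered by its own kernels.
        out = F.conv2d(feat.view(1, b * self.c, *feat.shape[2:]),
                       kernels, padding=self.k // 2, groups=b * self.c)
        return out.view(b, self.c, *feat.shape[2:])

if __name__ == "__main__":
    layer = DynamicConv()
    y = layer(torch.randn(2, 32, 64, 64), torch.randn(2, 11, 64, 64))
    print(y.shape)  # torch.Size([2, 32, 64, 64])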
Exploring Fusion Techniques and Explainable AI on Adapt-FuseNet: Context-Adaptive Fusion of Face and Gait for Person Identification
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-27, DOI: 10.1109/TBIOM.2024.3405081
Authors: Thejaswin S; Ashwin Prakash; Athira Nambiar; Alexandre Bernadino
Abstract: Biometrics such as human gait and face play a significant role in vision-based surveillance applications. However, multimodal fusion of biometric features is a challenging task in non-controlled environments due to the varying reliability of the features from different modalities in changing contexts, such as viewpoints, illumination, occlusion, background clutter, and clothing. For instance, in person identification in the wild, facial and gait features play a complementary role: in principle, the face provides more discriminatory features than gait if the person is frontal to the camera, while gait features are more discriminative in lateral views. Classical fusion techniques typically address this problem by explicitly computing in which context the data is obtained (e.g., frontal or lateral) and designing custom fusion strategies for each context. However, this requires an initial enumeration of all possible contexts and the design of context "detectors", which bring their own challenges. Hence, how to effectively utilize both facial and gait information in arbitrary conditions is still an open problem. In this paper we present a context-adaptive multi-biometric fusion strategy that does not require the prior determination of context features; instead, the context is implicitly encoded in the fusion process by a set of attentional weights that encode the relevance of the different modalities for each particular data sample. The key contributions of the paper are threefold. First, we propose a novel framework for the dynamic fusion of multiple biometric modalities leveraging attention techniques, denoted 'Adapt-FuseNet'. Second, we perform an extensive evaluation of the proposed method in comparison to various other fusion techniques such as Bilinear Pooling, Parallel Co-attention, Keyless Attention, Multi-modal Factorized High-order Pooling, and Multimodal Tucker Fusion. Third, an Explainable Artificial Intelligence-based interpretation tool is used to analyse how the attention mechanism of 'Adapt-FuseNet' captures context implicitly and weights the different modalities for the task at hand. This enables the interpretation of results in a more human-compliant way, hence boosting confidence in the operation of AI systems in the wild. Extensive experiments are carried out on two public gait datasets (CASIA-A and CASIA-B), showing that 'Adapt-FuseNet' significantly outperforms the state of the art.
Volume 6, Issue 4, Pages 515-527.
Citations: 0
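The abstract above describes per-sample attentional weights that encode the relevance of face versus gait. The sketch below is not the authors' architecture; it only shows the basic idea of a small gating network that scores each modality embedding and fuses them by a softmax-weighted sum. Embedding dimensions and the gating network layout are assumptions.

# Minimal sketch of per-sample attentional weighting of face and gait embeddings.
import torch
import torch.nn as nn

class AttentiveModalityFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Shared scorer applied to each modality embedding.
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, face_emb, gait_emb):
        # face_emb, gait_emb: (B, dim)
        stacked = torch.stack([face_emb, gait_emb], dim=1)   # (B, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (B, 2, 1), sums to 1
        fused = (weights * stacked).sum(dim=1)               # (B, dim)
        return fused, weights.squeeze(-1)

if __name__ == "__main__":
    fusion = AttentiveModalityFusion()
    fused, w = fusion(torch.randn(4, 256), torch.randn(4, 256))
    # Intuition: for frontal views the face weight should dominate,
    # for lateral views the gait weight should dominate.
    print(fused.shape, w.shape)  # torch.Size([4, 256]) torch.Size([4, 2])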
Balancing Accuracy and Error Rates in Fingerprint Verification Systems Under Presentation Attacks With Sequential Fusion
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-27, DOI: 10.1109/TBIOM.2024.3405554
Authors: Marco Micheletto; Gian Luca Marcialis
Abstract: The assessment of fingerprint PADs embedded into a comparison system is an emerging topic in biometric recognition. Providing models and methods for this purpose helps scientists, technologists, and companies simulate multiple scenarios and obtain a realistic view of the process's consequences on the recognition system. The most recent models aimed at deriving the overall system performance, especially under sequential assessment of fingerprint liveness and comparison, pointed out a significant decrease in the Genuine Acceptance Rate (GAR). In particular, our previous studies showed that PAD contributes predominantly to this drop, regardless of the comparison system used. This paper's goal is to establish a systematic approach for computing the "trade-off" between the gain in Impostor Attack Presentation Accept Rate (IAPAR) and the loss in GAR mentioned above. We propose a formal "trade-off" definition to measure the balance between tackling presentation attacks and the performance drop on genuine users. Experimental simulations and theoretical expectations confirm that an appropriate "trade-off" definition allows a complete view of the potential of sequential embedding.
Volume 6, Issue 3, Pages 409-419. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539301
Citations: 0
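To make the trade-off discussed above concrete, the back-of-the-envelope sketch below computes the approximate rates of a sequential PAD-then-comparison cascade under a simplifying independence assumption between the two decisions. This is not the formal trade-off definition proposed in the paper, and the numbers are illustrative placeholders; it only shows why a gain in IAPAR typically comes with a loss in GAR.

# Rough illustration of the GAR/IAPAR trade-off in a sequential PAD + comparison
# pipeline, assuming independent PAD and comparison decisions (an assumption).
def sequential_rates(gar_comp, iapmr_comp, apcer_pad, bpcer_pad):
    """Approximate cascade rates: genuine users and attacks must pass both stages."""
    gar_seq = (1.0 - bpcer_pad) * gar_comp   # genuine accepted by PAD and comparator
    iapar_seq = apcer_pad * iapmr_comp       # attack missed by PAD and matched
    return gar_seq, iapar_seq

if __name__ == "__main__":
    # Illustrative numbers, not taken from the paper.
    gar_comp, iapmr_comp = 0.98, 0.60        # comparison subsystem alone
    apcer_pad, bpcer_pad = 0.05, 0.02        # PAD subsystem alone
    gar_seq, iapar_seq = sequential_rates(gar_comp, iapmr_comp, apcer_pad, bpcer_pad)
    print(f"GAR:   {gar_comp:.3f} -> {gar_seq:.3f} (loss {gar_comp - gar_seq:.3f})")
    print(f"IAPAR: {iapmr_comp:.3f} -> {iapar_seq:.3f} (gain {iapmr_comp - iapar_seq:.3f})")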
Attention Label Learning to Enhance Interactive Vein Transformer for Palm-Vein Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-26, DOI: 10.1109/TBIOM.2024.3381654
Authors: Huafeng Qin; Changqing Gong; Yantao Li; Mounim A. El-Yacoubi; Xinbo Gao; Jun Wang
Abstract: In recent years, vein biometrics has gained significant attention due to its high security and privacy features. While deep neural networks have become the predominant classification approach for their ability to automatically extract discriminative vein features, they still face certain drawbacks: 1) existing transformer-based vein classifiers struggle to capture interactive information among different attention modules, limiting their feature representation capacity; 2) current label enhancement methods, although effective in learning label distributions for classifier training, fail to model long-range relations between classes. To address these issues, we present ALE-IVT, an Attention Label Enhancement-based Interactive Vein Transformer for palm-vein recognition. First, to extract vein features, we propose an interactive vein transformer (IVT) consisting of three branches: spatial attention, channel attention, and a convolutional module. To enhance performance, we integrate an interactive module that facilitates the sharing of discriminative features among the three branches. Second, we explore an attention-based label enhancement (ALE) approach to learn label distributions. ALE employs a self-attention mechanism to capture correlations between classes, enabling the inference of a label distribution for classifier training. As self-attention can model long-range dependencies between classes, the resulting label distribution provides enhanced supervisory information for training the vein classifier. Finally, we combine ALE with IVT to create ALE-IVT, trained in an end-to-end manner to boost the recognition accuracy of the IVT classifier. Our experiments on three public datasets demonstrate that our IVT model surpasses existing state-of-the-art vein classifiers. In addition, ALE outperforms current label enhancement approaches in terms of recognition accuracy.
Volume 6, Issue 3, Pages 341-351.
Citations: 0
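The label-enhancement idea above turns hard one-hot labels into a label distribution by modeling inter-class relations with self-attention. The sketch below is a generic illustration of that idea, not the ALE module from the paper: learnable class embeddings are refined by self-attention, a class-affinity matrix is derived, and the resulting soft labels supervise a classifier via KL divergence. Embedding size, head count, and the affinity construction are assumptions.

# Minimal sketch of attention-based label enhancement (soft label distributions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLabelEnhancement(nn.Module):
    def __init__(self, num_classes=100, dim=64):
        super().__init__()
        self.class_emb = nn.Parameter(torch.randn(num_classes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, onehot):
        # onehot: (B, num_classes); self-attention models relations between classes.
        emb = self.class_emb.unsqueeze(0)              # (1, C, dim)
        refined, _ = self.attn(emb, emb, emb)          # (1, C, dim)
        sim = refined.squeeze(0) @ self.class_emb.t()  # (C, C) class affinity
        dist = torch.softmax(sim, dim=-1)              # each row sums to 1
        return onehot @ dist                           # (B, C) soft label distribution

if __name__ == "__main__":
    ale = AttentionLabelEnhancement()
    onehot = F.one_hot(torch.tensor([3, 42]), num_classes=100).float()
    soft = ale(onehot)
    logits = torch.randn(2, 100)                       # placeholder classifier output
    loss = F.kl_div(F.log_softmax(logits, dim=-1), soft, reduction="batchmean")
    print(soft.shape, float(loss))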
Mobile Contactless Fingerprint Presentation Attack Detection: Generalizability and Explainability
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-21, DOI: 10.1109/TBIOM.2024.3403770
Authors: Jannis Priesnitz; Roberto Casula; Jascha Kolberg; Meiling Fang; Akhila Madhu; Christian Rathgeb; Gian Luca Marcialis; Naser Damer; Christoph Busch
Abstract: Contactless fingerprint recognition is an emerging biometric technology that has several advantages over contact-based schemes, such as improved user acceptance and fewer hygienic concerns. As with most other biometrics, Presentation Attack Detection (PAD) is crucial to preserving the trustworthiness of contactless fingerprint recognition methods. For many contactless biometric characteristics, Convolutional Neural Networks (CNNs) represent the state of the art in PAD algorithms. For CNNs, the ability to accurately classify samples not included in the training data is of particular interest, since these generalization capabilities indicate robustness in real-world scenarios. In this work, we focus on the generalizability and explainability aspects of CNN-based contactless fingerprint PAD methods. Based on previously obtained findings, we selected four CNN-based methods for contactless fingerprint PAD: two PAD methods designed for other biometric characteristics, an algorithm for contact-based fingerprint PAD, and a general-purpose ResNet18. For our evaluation, we use four databases and partition them using Leave-One-Out (LOO) protocols. Furthermore, the generalization capability to a newly captured database is tested. Moreover, we explore t-SNE plots as a means of explainability to interpret our results in more detail. The low D-EERs obtained from the LOO experiments (below 0.1% D-EER for every LOO group) indicate that the selected algorithms are well suited for the particular application. However, with a D-EER of 4.14%, the generalization experiment still leaves room for improvement.
Volume 6, Issue 4, Pages 561-574. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10536028
Citations: 0
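The evaluation above reports Detection Equal Error Rates (D-EER), the operating point at which the attack error (APCER) equals the bona fide error (BPCER). The sketch below shows a straightforward way to compute a D-EER from PAD scores by sweeping the decision threshold; the scores are random placeholders, not data from the paper, and the "higher score = bona fide" convention is an assumption.

# Minimal sketch of computing the D-EER from PAD scores.
import numpy as np

def d_eer(bona_fide_scores, attack_scores):
    """Higher score = more likely bona fide. Returns the D-EER in [0, 1]."""
    thresholds = np.unique(np.concatenate([bona_fide_scores, attack_scores]))
    best_gap, best_eer = 1.0, None
    for t in thresholds:
        bpcer = np.mean(bona_fide_scores < t)   # bona fide wrongly rejected
        apcer = np.mean(attack_scores >= t)     # attacks wrongly accepted
        gap = abs(apcer - bpcer)
        if gap < best_gap:
            best_gap, best_eer = gap, (apcer + bpcer) / 2.0
    return best_eer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bona_fide = rng.normal(0.8, 0.10, 1000)     # placeholder bona fide scores
    attacks = rng.normal(0.3, 0.15, 1000)       # placeholder attack scores
    print(f"D-EER: {d_eer(bona_fide, attacks):.2%}")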
MCLFIQ: Mobile Contactless Fingerprint Image Quality
IEEE Transactions on Biometrics, Behavior, and Identity Science, Pub Date: 2024-03-18, DOI: 10.1109/TBIOM.2024.3377686
Authors: Jannis Priesnitz; Axel Weißenfeld; Laurenz Ruzicka; Christian Rathgeb; Bernhard Strobl; Ralph Lessmann; Christoph Busch
Abstract: We propose MCLFIQ: Mobile Contactless Fingerprint Image Quality, the first quality assessment algorithm for mobile contactless fingerprint samples. To this end, we re-trained the NIST Fingerprint Image Quality (NFIQ) 2 method, which was originally designed for contact-based fingerprints, with a synthetic contactless fingerprint database. We evaluate the predictive performance of the resulting MCLFIQ model in terms of Error-vs.-Discard Characteristic (EDC) curves on three real-world contactless fingerprint databases using three recognition algorithms. In experiments, the MCLFIQ method is compared against the original NFIQ 2 method, a sharpness-based quality assessment algorithm developed for contactless fingerprint images, and the general-purpose image quality assessment method BRISQUE. Furthermore, benchmarks on four contact-based fingerprint datasets are also conducted. The obtained results show that fine-tuning NFIQ 2 on synthetic contactless fingerprints is a viable alternative to training on real databases. Moreover, the evaluation shows that our MCLFIQ method is more accurate and more robust than all baseline methods on contactless fingerprints. We suggest considering the proposed MCLFIQ method as a starting point for the development of a new standard algorithm for contactless fingerprint quality assessment.
Volume 6, Issue 2, Pages 272-287. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10473152
Citations: 0
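The Error-vs.-Discard Characteristic used above measures how the recognition error falls as the lowest-quality samples are progressively discarded: a good quality metric should make the error drop quickly. The sketch below computes a simple EDC over per-comparison quality values and false non-match outcomes; both are random placeholders, not data from the paper, and the aggregation of pairwise quality into a single value per comparison is assumed to have been done already.

# Minimal sketch of an Error-vs.-Discard Characteristic (EDC) computation.
import numpy as np

def edc(quality, false_non_match, discard_fractions):
    """quality: per-comparison quality; false_non_match: bool array of FNM outcomes."""
    order = np.argsort(quality)                 # lowest quality first
    fnm_sorted = false_non_match[order]
    n = len(quality)
    curve = []
    for frac in discard_fractions:
        kept = fnm_sorted[int(frac * n):]       # drop the worst-quality fraction
        curve.append(kept.mean() if len(kept) else 0.0)
    return curve

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    quality = rng.uniform(0, 100, 5000)
    # Placeholder behavior: low-quality comparisons fail to match more often.
    false_non_match = rng.uniform(0, 100, 5000) > quality
    fracs = [0.0, 0.05, 0.10, 0.20, 0.30]
    for f, e in zip(fracs, edc(quality, false_non_match, fracs)):
        print(f"discard {f:4.0%} -> FNMR {e:.3f}")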