AFR-Net: Attention-Driven Fingerprint Recognition Network

Steven A. Grosz; Anil K. Jain
DOI: 10.1109/TBIOM.2023.3317303
Journal: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 1, pp. 30-42
Publication date: 2023-09-19 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10255275/
Citations: 0

Abstract

The use of vision transformers (ViTs) in computer vision is increasing due to their limited inductive biases (e.g., locality, weight sharing, etc.) and greater scalability compared to other deep learning models. This has led to some initial studies on the use of ViTs for biometric recognition, including fingerprint recognition. In this work, we improve on these initial studies by (i) evaluating additional attention-based architectures, (ii) scaling to larger and more diverse training and evaluation datasets, and (iii) combining the complementary representations of attention-based and CNN-based embeddings for improved state-of-the-art (SOTA) fingerprint recognition (both authentication and identification). Our combined architecture, AFR-Net (Attention-Driven Fingerprint Recognition Network), outperforms several baseline models, including a SOTA commercial fingerprint system by Neurotechnology, Verifinger v12.3, across intra-sensor, cross-sensor, and latent-to-rolled fingerprint matching datasets. Additionally, we propose a realignment strategy using local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low-certainty situations, which boosts the overall recognition accuracy significantly. This realignment strategy requires no additional training and can be applied as a wrapper to any existing deep learning network (including attention-based, CNN-based, or both) to boost its performance in a variety of computer vision tasks.
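The realignment idea described in the abstract — keep the global-embedding score when the match is confident, and fall back to correspondences between local embeddings only when the global similarity lands in a low-certainty band — can be sketched roughly as follows. This is a hypothetical NumPy illustration of the general scheme, not the paper's implementation: the band thresholds (`low`, `high`), the greedy max-similarity matching of local embeddings, and the 50/50 score fusion are all illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_score(g1, g2, locals1, locals2, low=0.3, high=0.6):
    """Hypothetical sketch of score-level realignment.

    g1, g2          : global embeddings, shape (d,)
    locals1, locals2: local embeddings from intermediate feature
                      maps, shape (n, d) and (m, d)
    low, high       : illustrative bounds of the low-certainty band
    """
    s = cosine(g1, g2)
    if not (low <= s < high):
        return s  # confident decision: keep the global score

    # Low certainty: compare all pairs of local embeddings, take the
    # best correspondence for each local patch, and average.
    sims = locals1 @ locals2.T / (
        np.linalg.norm(locals1, axis=1, keepdims=True)
        * np.linalg.norm(locals2, axis=1, keepdims=True).T)
    refined = sims.max(axis=1).mean()

    # Fuse global and local evidence (equal weighting assumed here).
    return 0.5 * (s + refined)
```

Because the wrapper only consumes embeddings already produced by the network, it needs no retraining, which matches the abstract's claim that the strategy can be bolted onto any attention-based or CNN-based model.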