IEEE transactions on biometrics, behavior, and identity science: Latest Articles

Dynamic Residual Distillation Network for Face Anti-Spoofing With Feature Attention Learning
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-09-22 | DOI: 10.1109/TBIOM.2023.3312128
Yan He; Fei Peng; Min Long
Abstract: Currently, most face anti-spoofing methods target the generalization problem by relying on auxiliary information such as additional annotations and modalities. However, this auxiliary information is unavailable in practical scenarios, which potentially hinders the application of these methods. Meanwhile, their predetermined or fixed characteristics limit their generalization capability. To counter these problems, a dynamic residual distillation network with feature attention learning (DRDN) is developed to adaptively search a discriminative representation and embedding space without accessing any auxiliary information. Specifically, a pixel-level residual distillation module is first designed to obtain a domain-irrelevant liveness representation by suppressing both high-level semantic and low-frequency illumination factors, so that the domain divergence between the source and target domains can be adaptively mitigated. Secondly, feature-level attention contrastive learning is proposed to construct a distance-aware asymmetrical embedding space that avoids over-fitting the class boundary. Finally, an attention enhancement backbone incorporating attention blocks is designed to automatically capture important regions and channels during feature extraction. Experimental results and analysis demonstrate that the proposed method outperforms state-of-the-art anti-spoofing methods in both single-source and multi-source domain generalization scenarios.
Volume 5, Issue 4, Pages 579-592 | Citations: 0
AFR-Net: Attention-Driven Fingerprint Recognition Network
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-09-19 | DOI: 10.1109/TBIOM.2023.3317303
Steven A. Grosz; Anil K. Jain
Abstract: The use of vision transformers (ViTs) in computer vision is increasing due to their limited inductive biases (e.g., locality, weight sharing) and greater scalability compared with other deep learning models. This has led to initial studies on the use of ViTs for biometric recognition, including fingerprint recognition. In this work, we improve on these initial studies by (i) evaluating additional attention-based architectures, (ii) scaling to larger and more diverse training and evaluation datasets, and (iii) combining the complementary representations of attention-based and CNN-based embeddings for improved state-of-the-art (SOTA) fingerprint recognition (both authentication and identification). Our combined architecture, AFR-Net (Attention-Driven Fingerprint Recognition Network), outperforms several baseline models, including a SOTA commercial fingerprint system by Neurotechnology, Verifinger v12.3, across intra-sensor, cross-sensor, and latent-to-rolled fingerprint matching datasets. Additionally, we propose a realignment strategy that uses local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low-certainty situations, which boosts overall recognition accuracy significantly. This realignment strategy requires no additional training and can be applied as a wrapper to any existing deep learning network (attention-based, CNN-based, or both) to boost its performance in a variety of computer vision tasks.
Volume 6, Issue 1, Pages 30-42 | Citations: 0
Progressive Direction-Aware Pose Grammar for Human Pose Estimation
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-09-14 | DOI: 10.1109/TBIOM.2023.3315509
Lu Zhou; Yingying Chen; Jinqiao Wang
Abstract: Human pose estimation is challenged by many factors, such as complex articulation and occlusion. Generally, message passing among human joints plays an important role in rectifying wrong detections caused by these challenges. In this paper, we propose a progressive direction-aware pose grammar model that performs message passing by building the pose grammar in a novel fashion. First, a multi-scale Bi-C3D pose grammar module is proposed to promote message passing among human joints within a local range. We conduct message passing by means of 3D convolution (C3D), which proves more effective than other sequential modeling techniques. To facilitate the message passing, we devise a novel adaptive direction guidance module in which explicit direction information is embedded. Besides, we fuse the final results with attention maps to make full use of the bidirectional information; this fusion can be regarded as an ensemble process. Second, a more economical global regional grammar is introduced to build the relationships among human joints globally. The local-to-global modeling scheme promotes message passing in a progressive manner and boosts performance by a large margin. Promising results are achieved on the MPII, LSP, and COCO benchmarks.
Volume 5, Issue 4, Pages 593-605 | Citations: 0
Data and Algorithms for End-to-End Thermal Spectrum Face Verification
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-09-07 | DOI: 10.1109/TBIOM.2023.3304999
Thirimachos Bourlai; Jacob Rose; Suha Reddy Mokalla; Ananya Zabin; Lawrence Hornak; Christopher B. Nalty; Neehar Peri; Joshua Gleason; Carlos D. Castillo; Vishal M. Patel; Rama Chellappa
Abstract: Despite recent advances in deep convolutional neural networks (DCNNs), low-light and nighttime face verification remains challenging. Although state-of-the-art visible-spectrum face verification methods are robust to small changes in illumination, low-light conditions make it difficult to extract the discriminative features required for accurate authentication. In contrast, thermal face imagery, which captures body heat emissions, yields discriminative facial features that are invariant to lighting conditions, enabling recognition in low-light and nighttime settings. However, due to the increased cost and difficulty of obtaining diverse thermal-spectrum data, directly training face verification systems on small thermal-spectrum datasets results in poor verification performance. This paper presents a synthesis-based algorithm that adapts thermal-spectrum face images to the visible spectrum, allowing us to repurpose off-the-shelf visible-spectrum feature extractors without fine-tuning. Our proposed approach achieves state-of-the-art performance on the ARL-VTF dataset. Importantly, we study the impact of face alignment, pixel-level correspondence, identity classification with label smoothing, and synthetic data augmentation on multi-spectral face synthesis and verification. We show that our proposed method is widely applicable, robust, and highly effective on the ARL-VTF dataset. Finally, we present MILAB-VTF(B), a multi-distance, unconstrained thermal-visible dataset; to the best of our knowledge, it is the largest and most diverse dataset of its kind collected in realistic conditions. We show that our end-to-end thermal-to-visible face verification system serves as a strong baseline for the MILAB-VTF(B) dataset.
Volume 6, Issue 1, Pages 1-14 | Citations: 0
ALGRNet: Multi-Relational Adaptive Facial Action Unit Modelling for Face Representation and Relevant Recognitions
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-08-21 | DOI: 10.1109/TBIOM.2023.3306810
Xuri Ge; Joemon M. Jose; Pengcheng Wang; Arunachalam Iyer; Xiao Liu; Hu Han
Abstract: Facial action units (AUs) represent the fundamental activities of groups of muscles, exhibiting subtle changes that are useful for various face analysis tasks. One practical application in real-life situations is the automatic estimation of facial paralysis, which involves analyzing delicate changes in facial muscle regions and skin textures. It seems logical to assess the severity of facial paralysis by symmetrically combining well-defined muscle regions (similar to AUs), thus creating a comprehensive facial representation. To this end, we have developed a new model that automatically estimates the severity of facial paralysis, inspired by facial action unit (FAU) recognition, which deals with rich, detailed facial appearance information such as texture and muscle status. Specifically, a novel Adaptive Local-Global Relational Network (ALGRNet) is designed to adaptively mine the context of well-defined facial muscles and enhance the visual details of facial appearance and texture; it can be flexibly adapted to face-based tasks, e.g., FAU recognition and facial paralysis estimation. ALGRNet consists of three key structures: (i) an adaptive region learning module that identifies high-potential muscle response regions, (ii) a skip-BiLSTM that models the latent relationships among local regions, enabling better correlation between multiple regional lesion muscles and texture changes, and (iii) a feature fusion and refining module that explores the complementarity between the local and global aspects of the face. We have extensively evaluated ALGRNet on two widely recognized AU benchmarks, BP4D and DISFA. Furthermore, to assess the efficacy of FAUs in downstream applications, we have investigated their use in the identification of facial paralysis. Experimental findings on a facial paralysis benchmark, meticulously gathered and annotated by medical experts, underscore the potential of using the identified AU attributes to estimate the severity of facial paralysis.
Volume 5, Issue 4, Pages 566-578 | Citations: 0
D-LORD: DYSL-AI Database for Low-Resolution Disguised Face Recognition
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-08-18 | DOI: 10.1109/TBIOM.2023.3306703
Sunny Manchanda; Kaushik Bhagwatkar; Kavita Balutia; Shivang Agarwal; Jyoti Chaudhary; Muskan Dosi; Chiranjeev Chiranjeev; Mayank Vatsa; Richa Singh
Abstract: Face recognition in a low-resolution video stream captured by a surveillance camera is a challenging problem. The problem becomes even more complicated when the subjects appearing in the video wear disguise artifacts to hide their identity or try to impersonate someone. The lack of labeled datasets restricts current research on low-resolution face recognition under disguise. In this paper, we propose a large-scale database, D-LORD, to facilitate research on face recognition. The proposed D-LORD dataset includes high-resolution mugshot images of 2,100 individuals and 14,098 low-resolution surveillance videos, collectively containing over 1.2 million frames. Each frame in the dataset has been annotated with five facial keypoints and a single bounding box for each face. In the videos, subjects' faces are occluded by various disguise artifacts, such as face masks, sunglasses, wigs, hats, and monkey caps. To the best of our knowledge, D-LORD is the first database to address the complex problem of low-resolution face recognition with disguise variations. We also establish benchmark results for several state-of-the-art face detectors, frame selection algorithms, face restoration methods, and face verification algorithms using well-structured experimental protocols on the D-LORD dataset. The findings indicate that the Genuine Acceptance Rate (GAR) at 1% False Acceptance Rate (FAR) varies between 86.44% and 49.45% across different disguises and distances. The dataset is publicly available to the research community at https://dyslai.org/datasets/D-LORD/.
Volume 6, Issue 2, Pages 147-157 | Citations: 0
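The D-LORD benchmark above reports verification performance as the Genuine Acceptance Rate (GAR) at a fixed 1% False Acceptance Rate (FAR). As a reminder of how such an operating point is computed, here is a minimal sketch; it is not code from the paper, and the score distributions are synthetic assumptions for illustration only:

```python
import numpy as np

def gar_at_far(genuine, impostor, target_far=0.01):
    """Genuine Acceptance Rate at a fixed False Acceptance Rate:
    place the decision threshold at the (1 - target_far) quantile of
    the impostor scores, then count the genuine scores accepted."""
    threshold = np.quantile(np.asarray(impostor, dtype=float), 1.0 - target_far)
    return float((np.asarray(genuine, dtype=float) >= threshold).mean())

# Synthetic, well-separated score distributions (illustrative only).
rng = np.random.default_rng(2)
genuine_scores = rng.normal(3.0, 1.0, 5000)
impostor_scores = rng.normal(0.0, 1.0, 5000)
gar = gar_at_far(genuine_scores, impostor_scores, target_far=0.01)
print(f"GAR @ 1% FAR = {gar:.3f}")
```

Reporting GAR at a fixed FAR, rather than raw accuracy, makes systems comparable at the same tolerated level of false acceptances.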
Multi-Day Analysis of Wrist Electromyogram-Based Biometrics for Authentication and Personal Identification
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-08-14 | DOI: 10.1109/TBIOM.2023.3299948
Ashirbad Pradhan; Jiayuan He; Hyowon Lee; Ning Jiang
Abstract: Recently, the electromyogram (EMG) has been proposed to address some key limitations of current biometrics. Wrist-worn wearable sensors provide a non-invasive method for acquiring EMG signals for gesture recognition or biometric applications. EMG signals contain individual-specific information and can support multi-length codes or passwords (for example, performed as a combination of hand gestures). However, current EMG-based biometric research has two critical limitations: a small subject pool and reliance on single-session datasets. In this study, wrist EMG data were collected from 43 participants over three different days (Days 1, 8, and 29) while performing static hand/wrist gestures. Multi-day analysis, with training and testing data drawn from different days, was employed to test the robustness of the EMG-based biometrics. Multi-day authentication resulted in a median equal error rate (EER) of 0.039 when the code is unknown to intruders, and 0.068 when it is known. Multi-day identification achieved a median rank-5 accuracy of 93.0%. With intruders present, threshold-based identification yielded a median rank-5 accuracy of 91.7% while denying intruders access at a median rejection rate of 71.7%. These results demonstrate the potential of EMG-based biometrics in practical applications and bolster further research on the topic.
Open Access PDF: https://ieeexplore.ieee.org/iel7/8423754/10273758/10216354.pdf
Volume 5, Issue 4, Pages 553-565 | Citations: 0
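The wrist-EMG study above summarizes authentication performance as a median equal error rate (EER), the operating point where the false acceptance and false rejection rates coincide. A minimal sketch of how an EER is estimated from genuine and impostor score samples (illustrative only; the scores here are synthetic, not from the paper):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the equal error rate: the operating point where the
    false acceptance rate (FAR) on impostor scores equals the false
    rejection rate (FRR) on genuine scores. Higher score = more genuine."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    # Sweep the threshold over every observed score.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    # The curves cross once for well-behaved score distributions;
    # report the midpoint at the threshold where they are closest.
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0

# Synthetic, partially overlapping score distributions.
rng = np.random.default_rng(0)
genuine_scores = rng.normal(2.0, 1.0, 2000)
impostor_scores = rng.normal(0.0, 1.0, 2000)
eer = equal_error_rate(genuine_scores, impostor_scores)
print(f"EER = {eer:.3f}")
```

A lower EER indicates better separation of genuine and impostor score distributions; the paper's 0.039 vs. 0.068 gap quantifies how much harder authentication becomes when intruders know the gesture code.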
On the Relation Between ROC and CMC
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-07-25 | DOI: 10.1109/TBIOM.2023.3298561
Raymond N. J. Veldhuis; Kiran Raja
Abstract: We formulate a compact relation between the probabilistic Receiver Operating Characteristic (ROC) and the probabilistic Cumulative Match Characteristic (CMC) that predicts every entry of the probabilistic CMC as a functional on the probabilistic ROC. This result is shown to be valid for the individual probabilistic ROCs and CMCs of single identities, under the assumption that each identity has its own mated and nonmated Probability Density Functions (PDFs). Furthermore, it is shown that the relation still holds between the global probabilistic CMC of a gallery of identities and the average probabilistic ROC obtained by averaging the individual probabilistic ROCs of these identities over constant False Match Rates (FMR). We illustrate that the difference between individual probabilistic ROCs, and the difference between global and average probabilistic ROCs, explain discrepancies observed in the literature. The new formulation of the relation between probabilistic ROCs and CMCs allows us to prove that the probabilistic CMC, plotted as a function of fractional rank (i.e., linearly compressed to a domain ranging from 0 to 1), converges to the average probabilistic ROC as the gallery size increases. We illustrate our findings with experiments on synthetic data and on face, fingerprint, and iris data.
Open Access PDF: https://ieeexplore.ieee.org/iel7/8423754/10273758/10194409.pdf
Volume 5, Issue 4, Pages 538-552 | Citations: 0
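The paper above relates the probabilistic ROC to the probabilistic CMC. Its analytic relation is not reproduced here, but the empirical CMC it builds on is straightforward to compute from a probe-by-gallery score matrix. A sketch under the simplifying assumption that the diagonal holds the mated (same-identity) scores:

```python
import numpy as np

def empirical_cmc(scores):
    """Cumulative Match Characteristic from an n-probe x n-gallery
    similarity matrix whose diagonal holds the mated scores.
    Returns cmc[k-1] = fraction of probes whose mated score ranks
    within the top k of their row."""
    n = scores.shape[0]
    # Rank of each probe's mated score within its row (1 = best).
    ranks = np.array([1 + np.sum(scores[i] > scores[i, i]) for i in range(n)])
    return np.array([(ranks <= k).mean() for k in range(1, n + 1)])

# Synthetic gallery: mated scores drawn higher than nonmated ones.
rng = np.random.default_rng(1)
n = 200
scores = rng.normal(0.0, 1.0, (n, n))
scores[np.diag_indices(n)] = rng.normal(3.0, 1.0, n)
cmc = empirical_cmc(scores)
print(f"rank-1: {cmc[0]:.2f}, rank-5: {cmc[4]:.2f}")
```

By construction the CMC is nondecreasing in rank and reaches 1 at full gallery size; the paper's result concerns how this curve, replotted over fractional rank k/n, approaches the average ROC as n grows.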
Internal Structure Attention Network for Fingerprint Presentation Attack Detection From Optical Coherence Tomography
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-07-13 | DOI: 10.1109/TBIOM.2023.3293910
Haohao Sun; Yilong Zhang; Peng Chen; Haixia Wang; Ronghua Liang
Abstract: As a non-invasive optical imaging technique, optical coherence tomography (OCT) has proven promising for automatic fingerprint recognition system (AFRS) applications. Diverse approaches have been proposed for OCT-based fingerprint presentation attack detection (PAD). However, considering the complexity and variety of PA samples, it is extremely challenging to increase generalization ability with a limited PA dataset. To address this challenge, this paper presents a novel supervised-learning-based PAD method, denoted internal structure attention PAD (ISAPAD). ISAPAD applies prior knowledge to guide network training. Specifically, the dual-branch architecture in ISAPAD not only learns global features from OCT images but also concentrates on the layered structure features produced by the internal structure attention module (ISAM). The simple yet effective ISAM enables the network to obtain layered segmentation features belonging exclusively to bona fide samples from noisy OCT volume data. Combined with effective training strategies and PAD score generation rules, ISAPAD ensures reliable PAD performance even with limited training data. Extensive experiments and visualization analysis substantiate the effectiveness of the proposed method for OCT-based PAD.
Volume 5, Issue 4, Pages 524-537 | Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE transactions on biometrics, behavior, and identity science | Pub Date: 2023-07-01 | DOI: 10.1109/TBIOM.2023.3281994
Open Access PDF: https://ieeexplore.ieee.org/iel7/8423754/10210132/10210209.pdf
Volume 5, Issue 3, Pages C2-C2 | Citations: 0