IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Publications

Data and Algorithms for End-to-End Thermal Spectrum Face Verification
Pub Date: 2023-09-07 | DOI: 10.1109/TBIOM.2023.3304999
Thirimachos Bourlai; Jacob Rose; Suha Reddy Mokalla; Ananya Zabin; Lawrence Hornak; Christopher B. Nalty; Neehar Peri; Joshua Gleason; Carlos D. Castillo; Vishal M. Patel; Rama Chellappa
Abstract: Despite recent advances in deep convolutional neural networks (DCNNs), low-light and nighttime face verification remains challenging. Although state-of-the-art visible-spectrum face verification methods are robust to small changes in illumination, low-light conditions make it difficult to extract the discriminative features required for accurate authentication. In contrast, thermal face imagery, which captures body heat emissions, yields discriminative facial features that are invariant to lighting conditions, enabling low-light and nighttime recognition. However, due to the increased cost and difficulty of obtaining diverse thermal-spectrum data, directly training face verification systems on small thermal-spectrum datasets results in poor verification performance. This paper presents a synthesis-based algorithm that adapts thermal-spectrum face images to the visible spectrum, allowing us to repurpose off-the-shelf visible-spectrum feature extractors without fine-tuning. The proposed approach achieves state-of-the-art performance on the ARL-VTF dataset. Importantly, we study the impact of face alignment, pixel-level correspondence, identity classification with label smoothing, and synthetic data augmentation on multi-spectral face synthesis and verification, and show that the method is widely applicable, robust, and highly effective on ARL-VTF. Finally, we present MILAB-VTF(B), a multi-distance, unconstrained thermal-visible dataset; to the best of our knowledge, it is the largest and most diverse dataset of its kind collected in realistic conditions. Our end-to-end thermal-to-visible face verification system serves as a strong baseline for MILAB-VTF(B).
Vol. 6, No. 1, pp. 1-14
Citations: 0
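The pipeline the abstract describes, synthesizing a visible-like face from a thermal probe and then reusing a frozen visible-spectrum embedder, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `ThermalToVisibleGenerator` and the toy `embedder` are hypothetical stand-ins for the paper's synthesis network and an off-the-shelf face feature extractor.

```python
# Minimal sketch: thermal -> visible synthesis, then verification with a
# frozen visible-spectrum embedder. Architectures are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThermalToVisibleGenerator(nn.Module):
    """Placeholder encoder-decoder; the paper's synthesis model is not shown."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # 1-ch thermal -> 3-ch visible
        )
    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def verify(thermal_probe, visible_gallery, generator, embedder, threshold=0.4):
    """Cosine-similarity verification between a synthesized probe and a gallery face."""
    synthetic = generator(thermal_probe)                # thermal -> visible-like image
    f_probe = F.normalize(embedder(synthetic), dim=-1)
    f_gallery = F.normalize(embedder(visible_gallery), dim=-1)
    score = (f_probe * f_gallery).sum(-1)               # cosine similarity
    return score > threshold, score

# Toy embedder standing in for an off-the-shelf visible-spectrum extractor.
embedder = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 128))
match, score = verify(torch.randn(1, 1, 112, 112), torch.randn(1, 3, 112, 112),
                      ThermalToVisibleGenerator(), embedder)
print(match.item(), score.item())
```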
ALGRNet: Multi-Relational Adaptive Facial Action Unit Modelling for Face Representation and Relevant Recognitions
Pub Date: 2023-08-21 | DOI: 10.1109/TBIOM.2023.3306810
Xuri Ge; Joemon M. Jose; Pengcheng Wang; Arunachalam Iyer; Xiao Liu; Hu Han
Abstract: Facial action units (AUs) represent the fundamental activities of groups of muscles, exhibiting subtle changes that are useful for various face analysis tasks. One practical real-life application is the automatic estimation of facial paralysis, which involves analyzing delicate changes in facial muscle regions and skin textures. It is natural to assess the severity of facial paralysis by symmetrically combining well-defined muscle regions (similar to AUs), thus creating a comprehensive facial representation. To this end, we develop a new model that automatically estimates the severity of facial paralysis, inspired by facial action unit (FAU) recognition, which deals with rich, detailed facial appearance information such as texture and muscle status. Specifically, a novel Adaptive Local-Global Relational Network (ALGRNet) is designed to adaptively mine the context of well-defined facial muscles and enhance the visual details of facial appearance and texture; it can be flexibly adapted to face-based tasks such as FAU recognition and facial paralysis estimation. ALGRNet consists of three key structures: (i) an adaptive region learning module that identifies high-potential muscle response regions, (ii) a skip-BiLSTM that models the latent relationships among local regions, enabling better correlation between multiple regional lesion muscles and texture changes, and (iii) a feature fusion and refining module that explores the complementarity between the local and global aspects of the face. We extensively evaluate ALGRNet on two widely recognized AU benchmarks, BP4D and DISFA. Furthermore, to assess the efficacy of FAUs in downstream applications, we investigate their use in identifying facial paralysis. Experimental findings on a facial paralysis benchmark, meticulously gathered and annotated by medical experts, underscore the potential of using identified AU attributes to estimate the severity of facial paralysis.
Vol. 5, No. 4, pp. 566-578
Citations: 0
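Of ALGRNet's three modules, the skip-BiLSTM lends itself to a compact sketch: per-region AU features pass through a bidirectional LSTM to model inter-region relations, are added back via a skip connection, and are then fused with a global face feature. All layer sizes below are illustrative assumptions; the adaptive region learning and refinement details are omitted.

```python
# Sketch of a skip-BiLSTM over region features with local-global fusion.
# Dimensions are invented for illustration, not taken from the paper.
import torch
import torch.nn as nn

class SkipBiLSTMFusion(nn.Module):
    def __init__(self, region_dim=256, global_dim=512, hidden=128):
        super().__init__()
        self.bilstm = nn.LSTM(region_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.proj = nn.Linear(2 * hidden, region_dim)  # back to region_dim for the skip
        self.fuse = nn.Linear(region_dim + global_dim, global_dim)

    def forward(self, region_feats, global_feat):
        # region_feats: (B, num_regions, region_dim); global_feat: (B, global_dim)
        ctx, _ = self.bilstm(region_feats)             # relations among local regions
        refined = region_feats + self.proj(ctx)        # skip connection
        pooled = refined.mean(dim=1)                   # aggregate local evidence
        return self.fuse(torch.cat([pooled, global_feat], dim=-1))

out = SkipBiLSTMFusion()(torch.randn(2, 12, 256), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 512])
```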
D-LORD: DYSL-AI Database for Low-Resolution Disguised Face Recognition
Pub Date: 2023-08-18 | DOI: 10.1109/TBIOM.2023.3306703
Sunny Manchanda; Kaushik Bhagwatkar; Kavita Balutia; Shivang Agarwal; Jyoti Chaudhary; Muskan Dosi; Chiranjeev Chiranjeev; Mayank Vatsa; Richa Singh
Abstract: Face recognition in a low-resolution video stream captured by a surveillance camera is a challenging problem. The problem becomes even more complicated when the subjects appearing in the video wear disguise artifacts to hide their identity or try to impersonate someone. The lack of labeled datasets restricts current research on low-resolution face recognition under disguise. With this paper, we propose a large-scale database, D-LORD, to facilitate research on face recognition. The proposed D-LORD dataset includes high-resolution mugshot images of 2,100 individuals and 14,098 low-resolution surveillance videos, collectively containing over 1.2 million frames. Each frame in the dataset has been annotated with five facial keypoints and a single bounding box per face. In the videos, subjects' faces are occluded by various disguise artifacts, such as face masks, sunglasses, wigs, hats, and monkey caps. To the best of our knowledge, D-LORD is the first database to address the complex problem of low-resolution face recognition with disguise variations. We also establish benchmark results for several state-of-the-art face detectors, frame selection algorithms, face restoration methods, and face verification algorithms using well-structured experimental protocols on the D-LORD dataset. The findings indicate that the Genuine Acceptance Rate (GAR) at 1% False Acceptance Rate (FAR) varies between 49.45% and 86.44% across different disguises and distances. The dataset is publicly available to the research community at https://dyslai.org/datasets/D-LORD/.
Vol. 6, No. 2, pp. 147-157
Citations: 0
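For readers unfamiliar with the headline metric, the GAR at a fixed FAR is obtained by setting the decision threshold at the appropriate quantile of the impostor scores and measuring how many genuine comparisons clear it. A minimal sketch with synthetic scores (not the benchmark's actual outputs):

```python
# Sketch: GAR at a fixed FAR from genuine/impostor verification scores.
import numpy as np

def gar_at_far(genuine_scores, impostor_scores, target_far=0.01):
    # Threshold = the (1 - FAR) quantile of impostor scores: only `target_far`
    # of impostors score above it.
    threshold = np.quantile(impostor_scores, 1.0 - target_far)
    return float(np.mean(genuine_scores > threshold)), float(threshold)

rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.15, 5000)    # mated comparison scores (synthetic)
impostor = rng.normal(0.3, 0.15, 50000)  # nonmated comparison scores (synthetic)
gar, thr = gar_at_far(genuine, impostor, 0.01)
print(f"GAR@1%FAR = {gar:.4f} at threshold {thr:.3f}")
```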
Multi-Day Analysis of Wrist Electromyogram-Based Biometrics for Authentication and Personal Identification
Pub Date: 2023-08-14 | DOI: 10.1109/TBIOM.2023.3299948
Ashirbad Pradhan; Jiayuan He; Hyowon Lee; Ning Jiang
Abstract: Recently, the electromyogram (EMG) has been proposed to address some key limitations of current biometrics. Wrist-worn wearable sensors provide a non-invasive method for acquiring EMG signals for gesture recognition or biometric applications. EMG signals contain individual-specific information and can support multi-length codes or passwords (for example, formed by performing a combination of hand gestures). However, current EMG-based biometric research has two critical limitations: small subject pools and reliance on single-session datasets. In this study, wrist EMG data were collected from 43 participants over three different days (Days 1, 8, and 29) while they performed static hand/wrist gestures. Multi-day analysis, with training and testing data drawn from different days, was employed to test the robustness of the EMG-based biometrics. Multi-day authentication resulted in a median equal error rate (EER) of 0.039 when the code is unknown to intruders, and an EER of 0.068 when it is known. Multi-day identification achieved a median rank-5 accuracy of 93.0%. With intruders present, threshold-based identification achieved a median rank-5 accuracy of 91.7% while denying intruders access at a median rejection rate of 71.7%. These results demonstrate the potential of EMG-based biometrics in practical applications and motivate further research.
Vol. 5, No. 4, pp. 553-565
Open access PDF: https://ieeexplore.ieee.org/iel7/8423754/10273758/10216354.pdf
Citations: 0
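The equal error rate quoted above is the operating point at which the false acceptance and false rejection rates coincide. A small sketch of its estimation from score distributions, using synthetic placeholder scores rather than the wrist-EMG data:

```python
# Sketch: equal error rate (EER) from genuine/impostor score distributions.
import numpy as np

def equal_error_rate(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # FRR: genuine scores rejected; FAR: impostor scores accepted, per threshold.
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))   # threshold where the two error rates cross
    return (far[i] + frr[i]) / 2.0

rng = np.random.default_rng(1)
eer = equal_error_rate(rng.normal(0.75, 0.10, 2000), rng.normal(0.40, 0.12, 2000))
print(f"EER ~ {eer:.3f}")
```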
On the Relation Between ROC and CMC
Pub Date: 2023-07-25 | DOI: 10.1109/TBIOM.2023.3298561
Raymond N. J. Veldhuis; Kiran Raja
Abstract: We formulate a compact relation between the probabilistic Receiver Operating Characteristic (ROC) and the probabilistic Cumulative Match Characteristic (CMC) that predicts every entry of the probabilistic CMC as a functional on the probabilistic ROC. This result is shown to be valid for the individual probabilistic ROCs and CMCs of single identities, based on the assumption that each identity has its own mated and nonmated Probability Density Functions (PDFs). Furthermore, we show that the relation still holds between the global probabilistic CMC of a gallery of identities and the average probabilistic ROC obtained by averaging the individual probabilistic ROCs of those identities over constant False Match Rates (FMR). We illustrate that the differences between individual probabilistic ROCs, and between global and average probabilistic ROCs, explain discrepancies observed in the literature. The new formulation allows us to prove that the probabilistic CMC plotted as a function of fractional rank, i.e., linearly compressed to a domain ranging from 0 to 1, converges to the average probabilistic ROC as the gallery size increases. We illustrate our findings by experiments on synthetic data and on face, fingerprint, and iris data.
Vol. 5, No. 4, pp. 538-552
Open access PDF: https://ieeexplore.ieee.org/iel7/8423754/10273758/10194409.pdf
Citations: 0
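Under the standard assumption the abstract invokes (one mated score and N-1 independent nonmated scores per probe), a relation of this kind can be written down explicitly; the sketch below is the textbook form and may differ in detail from the paper's exact formulation. Writing the ROC as TMR = R(f) with f the FMR, the mated score's FMR-value has cumulative distribution R, so the probability of rank at most r is a functional of R alone:

```latex
\mathrm{CMC}(r) \;=\; \sum_{k=0}^{r-1} \binom{N-1}{k}
\int_{0}^{1} f^{k}\,(1-f)^{N-1-k}\, R'(f)\, \mathrm{d}f
```

Because the binomial kernel concentrates near f = k/(N-1) as the gallery grows, plotting the CMC against the fractional rank r/N recovers the ROC in the limit, consistent with the convergence result stated in the abstract.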
Internal Structure Attention Network for Fingerprint Presentation Attack Detection From Optical Coherence Tomography
Pub Date: 2023-07-13 | DOI: 10.1109/TBIOM.2023.3293910
Haohao Sun; Yilong Zhang; Peng Chen; Haixia Wang; Ronghua Liang
Abstract: As a non-invasive optical imaging technique, optical coherence tomography (OCT) has proven promising for automatic fingerprint recognition system (AFRS) applications. Diverse approaches have been proposed for OCT-based fingerprint presentation attack detection (PAD). However, given the complexity and variety of PA samples, it is extremely challenging to increase generalization ability with a limited PA dataset. To address this challenge, this paper presents a novel supervised learning-based PAD method, denoted internal structure attention PAD (ISAPAD), which applies prior knowledge to guide network training. Specifically, the dual-branch architecture in ISAPAD not only learns global features from the OCT images but also concentrates on the layered structure feature produced by the internal structure attention module (ISAM). The simple yet effective ISAM enables the network to obtain layered segmentation features belonging exclusively to bona fide samples from noisy OCT volume data. By incorporating effective training strategies and PAD score generation rules, ISAPAD ensures reliable PAD performance even with limited training data. Extensive experiments and visualization analysis substantiate the effectiveness of the proposed method for OCT PAD.
Vol. 5, No. 4, pp. 524-537
Citations: 0
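The dual-branch design can be caricatured as one global pathway plus one pathway gated by a learned internal-structure attention mask. The sketch below is an assumption-laden reading of the abstract (layer shapes invented, ISAM's segmentation supervision omitted), not the published architecture:

```python
# Loose sketch of a dual-branch PAD network: global pooling of shared features
# plus pooling gated by an ISAM-like attention mask. Shapes are illustrative.
import torch
import torch.nn as nn

class DualBranchPAD(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.attn = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())  # ISAM-like mask
        self.head = nn.Linear(64, 2)  # bona fide vs. presentation attack

    def forward(self, x):
        feat = self.backbone(x)                 # shared features from an OCT B-scan
        mask = self.attn(feat)                  # internal-structure attention
        g = feat.mean(dim=(2, 3))               # global branch: plain pooling
        s = (feat * mask).mean(dim=(2, 3))      # structure branch: masked pooling
        return self.head(torch.cat([g, s], dim=-1))

logits = DualBranchPAD()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```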
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
Pub Date: 2023-07-01 | DOI: 10.1109/TBIOM.2023.3281994
Vol. 5, No. 3, pp. C2-C2
Open access PDF: https://ieeexplore.ieee.org/iel7/8423754/10210132/10210209.pdf
Citations: 0
Best Paper Section: IEEE International Conference on Automatic Face and Gesture Recognition 2021
Pub Date: 2023-07-01 | DOI: 10.1109/TBIOM.2023.3296348
Rachael E. Jack; Vishal M. Patel; Pavan Turaga; Mayank Vatsa; Rama Chellappa; Alex Pentland; Richa Singh
Abstract: The IEEE International Conference on Automatic Face and Gesture Recognition (FG) is the premier international conference on vision-based automatic face and body behavior analysis and applications. Since the first meeting in Zurich in 1994, the FG conference has grown from a biennial to an annual meeting, presenting the latest research developments in face and gesture analysis. FG2021 was planned as an in-person meeting hosted in the historic city of Jodhpur, India. However, due to the COVID-19 pandemic, the organizing committee decided to hold FG2021 as an online conference from December 15 to 18, 2021. Over 142 papers were presented at FG2021, and based on reviewer and area chair recommendations, the PC Chairs invited a set of top-reviewed papers to a special issue, "Best of Face & Gesture 2021," in the IEEE Transactions on Biometrics, Behavior, and Identity Science (T-BIOM). The meticulous T-BIOM review process ensured that significantly extended versions of research papers initially presented at FG2021 are included in this special issue. The nine accepted papers fall into three sets: (i) algorithms for 3D-information-based face and motion processing; (ii) algorithms for head pose estimation, emotion recognition, differentiable rendering, dictionary attacks, and group detection; and (iii) a student engagement dataset for affect transfer learning for behavior prediction.
Vol. 5, No. 3, pp. 305-307
Open access PDF: https://ieeexplore.ieee.org/iel7/8423754/10210132/10210211.pdf
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
Pub Date: 2023-07-01 | DOI: 10.1109/TBIOM.2023.3281995
Vol. 5, No. 3, pp. C3-C3
Open access PDF: https://ieeexplore.ieee.org/iel7/8423754/10210132/10210210.pdf
Citations: 0
Cross-Modality Channel Mixup and Modality Decorrelation for RGB-Infrared Person Re-Identification
Pub Date: 2023-06-26 | DOI: 10.1109/TBIOM.2023.3287275
Boyu Hua; Junyin Zhang; Ziqiang Li; Yongxin Ge
Abstract: This paper focuses on RGB-infrared person re-identification, which is challenged by a large modality gap between RGB and infrared images. Most existing methods attempt to learn discriminative modality-invariant features; they make use of identity annotations but do not sufficiently exploit intra-modality and cross-modality sample relations using modality annotations. In this paper, we propose a Cross-modality channel Mixup and Modality Decorrelation method (CMMD) that explores sample relations at both the image and feature levels, designed to reduce redundant modality-specific information in the representations and highlight modality-shared information. Specifically, we first design a cross-modality channel mixup (CCM) augmentation at the image level, which combines a random RGB channel and an infrared image via mixup to generate a new image while keeping identity information unchanged. This augmentation can be integrated into other methods easily without introducing extra parameters or models. In addition, a modality decorrelation quintuplet loss (MDQL) is presented to mine hard samples in a batch, that is, positive/negative intra-modality and cross-modality samples, to learn modality-invariant representations in the shared latent space at the feature level. This loss encourages the closest negative sample and the farthest positive sample to appear with equal probability in both modalities. Comprehensive experimental results on two challenging datasets, SYSU-MM01 and RegDB, demonstrate the competitive performance of our method against state-of-the-art approaches.
Vol. 5, No. 4, pp. 512-523
Citations: 0
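The cross-modality channel mixup augmentation is concrete enough to sketch directly from the abstract: one randomly chosen RGB channel is mixed with an infrared image of the same identity. The uniform sampling of the mixing weight below is an assumption; the paper may constrain it differently.

```python
# Sketch of cross-modality channel mixup (CCM) as described in the abstract.
import numpy as np

def ccm_augment(rgb, ir, rng=np.random.default_rng()):
    """rgb: (H, W, 3) floats in [0, 1]; ir: (H, W) floats in [0, 1]."""
    channel = rng.integers(0, 3)       # pick one RGB channel at random
    lam = rng.uniform(0.0, 1.0)        # mixup coefficient (assumed uniform)
    mixed = lam * rgb[..., channel] + (1.0 - lam) * ir
    return mixed                       # single-channel image; identity label unchanged

aug = ccm_augment(np.random.rand(128, 64, 3), np.random.rand(128, 64))
print(aug.shape)  # (128, 64)
```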