IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Articles

IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
Pub Date: 2024-07-18 | DOI: 10.1109/TBIOM.2024.3399382
Vol. 6, No. 3, pp. C2-C2
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
Pub Date: 2024-07-18 | DOI: 10.1109/TBIOM.2024.3399383
Vol. 6, No. 3, pp. C3-C3
Citations: 0
Non-Motorized Lane Target Behavior Classification Based on Millimeter Wave Radar With P-Mrca Convolutional Neural Network
Pub Date: 2024-07-15 | DOI: 10.1109/TBIOM.2024.3428577
Jiaqing He; Yihan Zhu; Bing Hua; Zhihuo Xu; Yongwei Zhang; Liu Chu; Quan Shi; Robin Braun; Jiajia Shi
Abstract: In the fields of road regulation and road safety, the classification of target behaviors in non-motorized lanes is of great significance. However, because adverse weather and lighting conditions degrade recognition performance, we use radar to recognize targets in non-motorized lanes, addressing the challenges posed by frequent traffic accidents in these lanes. This paper proposes a classification and recognition method for non-motorized lane target behavior. First, a radar data acquisition system is constructed to extract the micro-Doppler features of the target. Then, to address the shortcomings of traditional deep learning networks, this paper proposes a multi-scale residual channel attention mechanism that better performs multi-scale feature extraction, and adds it to a convolutional neural network (CNN) model to construct a multi-scale residual channel attention network (MrcaNet), which identifies and classifies target behaviors specific to non-motorized lanes. To better combine the feature information contained in high-level and low-level features, MrcaNet is combined with a feature pyramid structure, yielding a more efficient network model, the feature pyramid multi-scale residual channel attention network (P-MrcaNet). The results show that the model achieves the best scores on classification metrics such as accuracy, precision, recall, F1 score, and Kappa coefficient, about 10% higher than traditional deep learning methods. The method not only performs well on this paper's dataset but also adapts well to public datasets.
Vol. 7, No. 1, pp. 71-81
Citations: 0
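The channel-attention idea behind MrcaNet can be pictured as a squeeze-and-excitation style gate over feature-map channels. The sketch below is a minimal NumPy illustration with randomly initialized (not learned) weights; `channel_attention`, the `reduction` ratio, and the toy shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def channel_attention(feature_map, reduction=4, rng=None):
    """Squeeze-and-excitation style channel attention over a (C, H, W) map.

    Weights here are randomly initialized for illustration; in a trained
    network they would be learned, and the block would sit inside a
    multi-scale residual path.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feature_map.shape[0]
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP followed by a sigmoid gate in (0, 1)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)       # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid
    # Scale: reweight each channel of the input map
    return feature_map * gates[:, None, None]

fmap = np.random.default_rng(1).standard_normal((8, 4, 4))
out = channel_attention(fmap)
print(out.shape)  # (8, 4, 4)
```

Because the gates lie in (0, 1), the block can only attenuate channels, which is what lets the network emphasize the informative micro-Doppler channels.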
GaitAGE: Gait Age and Gender Estimation Based on an Age- and Gender-Specific 3D Human Model
Pub Date: 2024-07-08 | DOI: 10.1109/TBIOM.2024.3424841
Xiang Li; Yasushi Makihara; Chi Xu; Yasushi Yagi
Abstract: Gait-based human age and gender estimation has potential applications in visual surveillance, such as searching for specific pedestrian groups and automatically counting customers by age and gender. Unlike most existing methods, which exploit widely used appearance-based gait features (e.g., gait energy images and silhouettes) or simple model-based gait features (e.g., leg length, stride width/frequency, and head-to-body ratio), we explore a recently popular 3D human mesh model, the skinned multi-person linear model (SMPL), which is more robust to various covariates (e.g., view angles). Furthermore, instead of the commonly used gender-neutral SMPL model, we propose a simple yet effective method to generate more realistic age- and gender-specific human mesh models by interpolating among male, female, and infant SMPL models using two learned age and gender weights. The age weight controls the proportion of importance between the male/female and infant models and is learned in a data-driven scheme by considering the paired relation between ground-truth ages and age weights. The gender weight controls the proportion of importance between the male and female models, indicating the gender probability. We then explore the use of the generated realistic mesh models for age and gender estimation. Finally, the human mesh reconstruction and age and gender estimation modules are integrated into a unified end-to-end framework for training and testing. Experimental results on the OU-MVLP and FVG datasets demonstrate that the proposed method achieves both good mesh reconstruction and state-of-the-art age and gender estimation results.
Vol. 7, No. 1, pp. 47-60
Citations: 0
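The interpolation among male, female, and infant templates can be illustrated as a convex blend of vertex arrays driven by the two weights. `blend_body_model` and its inputs are hypothetical simplifications: real SMPL models also carry shape and pose parameters, which this sketch omits.

```python
import numpy as np

def blend_body_model(v_male, v_female, v_infant, age_w, gender_w):
    """Blend template vertices from male/female/infant body models.

    Hypothetical simplification of the paper's idea: gender_w in [0, 1]
    mixes the male and female templates, and age_w in [0, 1] mixes that
    adult blend against the infant template.
    """
    adult = gender_w * v_male + (1.0 - gender_w) * v_female
    return age_w * adult + (1.0 - age_w) * v_infant
```

With age_w = 1 and gender_w = 1 the blend reduces to the male template, and age_w = 0 recovers the infant template, matching the stated roles of the two weights.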
Toward Deep Face Spoofing: Taxonomy, Recent Advances, and Open Challenges
Pub Date: 2024-06-20 | DOI: 10.1109/TBIOM.2024.3417372
Dhimas Arief Dharmawan; Anto Satriyo Nugroho
Abstract: Deep neural networks are increasingly employed to create adversarial face images that aim to deceive face recognition systems. While the majority of studies concentrate on digital attacks, their relevance extends to face spoofing. Notably, they can generate potential face images of victims when attackers lack knowledge of the individuals registered in face recognition systems. Regrettably, recent advances in attacking face recognition systems using deep neural networks, their performance, and their transferability to physical attacks (deep face spoofing) lack systematic exploration. This paper addresses this gap by presenting the first comprehensive survey of current research in this domain. The review begins by defining the concept of deep face spoofing and introduces a pioneering taxonomy to systematically consolidate recent advances toward deep face spoofing. The main section of the paper provides in-depth evaluations of the mechanism, performance, and applicability of diverse deep neural network-based attack algorithms against face recognition systems. Subsequently, the paper outlines current challenges in deep face spoofing, including the absence of evaluations of recent attacks against state-of-the-art face anti-spoofing algorithms and the limited transferability of recent digital attacks to physical attacks. This part also covers open challenges in deep face spoofing detection, since it is crucial to note that studying various deep face spoofing algorithms should always be seen as an effort to investigate the vulnerability of face recognition systems against such evolved attacks, not as an endeavor to gain access for illegal purposes. To enhance accessibility to a broad range of research papers in this area, an accompanying web page (https://github.com/dhimasarief/DFS_DFAS) has been established. It serves as a dynamic repository of studies focusing on deep face spoofing, continuously curated with new findings and contributions.
Vol. 7, No. 1, pp. 16-32
Citations: 0
Zero-Shot Demographically Unbiased Image Generation From an Existing Biased StyleGAN
Pub Date: 2024-06-18 | DOI: 10.1109/TBIOM.2024.3416403
Anubhav Jain; Rishit Dholakia; Nasir Memon; Julian Togelius
Abstract: Face recognition systems have made significant strides thanks to data-heavy deep learning models, but these models rely on large privacy-sensitive datasets. Recent work in facial analysis and recognition has thus started making use of synthetic datasets generated from GANs and diffusion-based generative models. These models, however, lack fairness in terms of demographic representation and can introduce the same biases into the trained downstream tasks. This can have serious societal and security implications. To address this issue, we propose a methodology that generates unbiased data from a biased generative model using an evolutionary algorithm. We show results for a StyleGAN2 model trained on the Flickr Faces High Quality dataset, generating data for single demographic attributes and combinations of them, such as Black and Woman. We generate a large racially balanced dataset of 13.5 million images and show that it boosts the performance of facial recognition and analysis systems while reducing their biases. We have made our code base (https://github.com/anubhav1997/youneednodataset) public to allow researchers to reproduce our work.
Vol. 6, No. 4, pp. 498-514
Citations: 0
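Evolutionary search over a generator's latent space can be sketched as a simple select-and-mutate loop. In the toy below, the `fitness` callback stands in for an attribute classifier scoring generated images (e.g., the probability of a target demographic); `evolve_latents` and its population and mutation settings are assumptions for illustration, not the paper's algorithm, and no real StyleGAN is ever called.

```python
import numpy as np

def evolve_latents(fitness, dim=16, pop=32, gens=30, sigma=0.3, seed=0):
    """Toy elitist evolutionary search over generator latent vectors.

    Each generation keeps the top quarter of the population by fitness,
    resamples parents from that elite set, and perturbs them with
    Gaussian mutation noise of scale sigma.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(v) for v in z])
        elite = z[np.argsort(scores)[-pop // 4:]]              # top quarter
        parents = elite[rng.integers(0, len(elite), pop)]      # resample
        z = parents + sigma * rng.standard_normal((pop, dim))  # mutate
    return max(z, key=fitness)
```

In the paper's setting the returned latents would be decoded by the biased generator into images of the underrepresented group, rebalancing the synthetic dataset without retraining the GAN.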
Sclera-TransFuse: Fusing Vision Transformer and CNN for Accurate Sclera Segmentation and Recognition
Pub Date: 2024-06-17 | DOI: 10.1109/TBIOM.2024.3415484
Caiyong Wang; Haiqing Li; Yixin Zhang; Guangzhe Zhao; Yunlong Wang; Zhenan Sun
Abstract: This paper investigates a deep learning based unified framework for accurate sclera segmentation and recognition, named Sclera-TransFuse. Unlike previous CNN-based methods, our framework incorporates a Vision Transformer and a CNN to extract complementary feature representations, which benefit both subtasks. Specifically, for sclera segmentation, a novel two-stream hybrid model, Sclera-TransFuse-Seg, is developed to integrate classical ResNet-34 and recently emerging Swin Transformer encoders in parallel. The dual encoders first extract coarse- and fine-grained feature representations at hierarchical stages, separately. A Cross-Domain Fusion (CDF) module based on information interaction and a self-attention mechanism is then introduced to efficiently fuse the multi-scale features extracted from the dual encoders. Finally, the fused features are progressively upsampled and aggregated to predict the sclera masks in the decoder, while deep supervision strategies are employed to learn intermediate feature representations better and faster. From the sclera segmentation results, a sclera ROI image is generated for sclera feature extraction. Additionally, a new sclera recognition model, Sclera-TransFuse-Rec, is proposed by combining a lightweight EfficientNet B0 and a multi-scale Vision Transformer in sequence to encode local and global sclera vasculature feature representations. Extensive experiments on several publicly available databases suggest that our framework consistently achieves state-of-the-art performance on various sclera segmentation and recognition benchmarks, including the 8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023). A UBIRIS.v2 subset of 683 eye images with manually labeled sclera masks, as well as our code, is publicly available to the community at https://github.com/lhqqq/Sclera-TransFuse.
Vol. 6, No. 4, pp. 575-590
Citations: 0
WASD: A Wilder Active Speaker Detection Dataset
Pub Date: 2024-06-11 | DOI: 10.1109/TBIOM.2024.3412821
Tiago Roxo; Joana Cabral Costa; Pedro R. M. Inácio; Hugo Proença
Abstract: Current Active Speaker Detection (ASD) models achieve good results in cooperative settings with reliable face access using only sound and facial features, which is not suited to less constrained conditions. To demonstrate this limitation of current datasets, we propose the Wilder Active Speaker Detection (WASD) dataset, with increased difficulty achieved by targeting the key components of current ASD: audio and face. Grouped into 5 categories, WASD contains incremental challenges for ASD with tactical impairment of audio and face data, and provides a new source for ASD via subject body annotations. To highlight the new challenges of WASD, we divide it into Easy (cooperative settings) and Hard (audio and/or face specifically degraded) groups, and assess state-of-the-art models' performance on WASD and on the most challenging available ASD dataset, AVA-ActiveSpeaker. The results show that: 1) AVA-ActiveSpeaker prepares models for cooperative settings but not wilder ones (surveillance); and 2) current ASD approaches cannot perform reliably in wilder settings, even if trained with challenging data. To prove the importance of the body for wild ASD, we propose a baseline that complements face and audio with body information and surpasses state-of-the-art models on WASD and Columbia. All contributions are available at https://github.com/Tiago-Roxo/WASD.
Vol. 7, No. 1, pp. 61-70
Citations: 0
CoNAN: Conditional Neural Aggregation Network for Unconstrained Long Range Biometric Feature Fusion
Pub Date: 2024-06-06 | DOI: 10.1109/TBIOM.2024.3410311
Bhavin Jawade; Deen Dayal Mohan; Prajwal Shetty; Dennis Fedorishin; Srirangaraj Setlur; Venu Govindaraju
Abstract: Person recognition from image sets acquired under unregulated and uncontrolled settings, such as at large distances, low resolutions, varying viewpoints, illumination, pose, and atmospheric conditions, is challenging. Feature aggregation, which condenses a set of N feature representations in a template into a single global representation, plays a pivotal role in such recognition systems. Existing works in traditional face feature aggregation either utilize metadata or high-dimensional intermediate feature representations to estimate feature quality for aggregation. However, generating high-quality metadata or style information is not feasible for extremely low-resolution faces captured in long-range, high-altitude settings. To overcome these limitations, we propose a feature-distribution conditioning approach called CoNAN for template aggregation. Specifically, our method learns a context vector conditioned on the distribution of the incoming feature set, which is used to weigh the features based on their estimated informativeness. The proposed method produces state-of-the-art results on long-range unconstrained face recognition datasets such as BTS and DroneSURF, validating the advantages of such an aggregation strategy. We also present CoNAN's results on other modalities, such as body features and gait, showing that it generalizes beyond faces. We additionally provide extensive qualitative and quantitative experiments on the different components of CoNAN.
Vol. 6, No. 4, pp. 602-612
Citations: 0
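The conditioning idea can be sketched as weighting each feature by a score derived from the set's own distribution statistics, then taking a softmax-weighted average. In the sketch below, `conditional_aggregate` uses a fixed random projection as a stand-in for CoNAN's learned conditioning network, so it illustrates the mechanism rather than reproducing the method.

```python
import numpy as np

def conditional_aggregate(features):
    """Aggregate N feature vectors (N, d) into one template embedding (d,).

    A context vector built from the set's mean and std (its distribution
    statistics) scores each feature's informativeness; softmax turns the
    scores into aggregation weights. The projection matrix here is random,
    standing in for learned parameters.
    """
    n, d = features.shape
    context = np.concatenate([features.mean(0), features.std(0)])  # (2d,)
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((d, 2 * d)) * 0.1   # stand-in for learned weights
    logits = features @ (proj @ context)           # one score per feature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ features                      # convex combination
```

Because the weights are a convex combination, the aggregated descriptor stays inside the span of the input features, and low-informativeness features are simply down-weighted rather than discarded.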
Posture and Body Movement Effects on Behavioral Biometrics for Continuous Smartphone Authentication
Pub Date: 2024-06-04 | DOI: 10.1109/TBIOM.2024.3409349
Nicholas Cariello; Robert Eslinger; Rosemary Gallagher; Isaac Kurtzer; Paolo Gasti; Kiran S. Balagani
Abstract: Continuous authentication aims to authenticate users at regular intervals post-login, typically using biometric features that capture the user's behavior. One of the drawbacks of continuous authentication is that it usually introduces high authentication latency, i.e., behavioral features need to be captured for 45-120 seconds to achieve acceptable authentication error rates. In this paper, we take a step toward addressing this problem by harnessing 3D motion capture data and creating an extensive set of body motion and posture features, with the goal of achieving low authentication error rates at short (1-5 second) authentication latencies. To evaluate our features, we collected a dataset from 39 users engaged in a set of smartphone tasks performed in a 3D motion capture studio. To collect the data, we placed 41 IR-reflective markers on the subjects' bodies and 3 on the smartphone, tracked by 3D motion capture cameras. During data collection, subjects were either walking along a pre-determined path or sitting. We show that our features can achieve a low equal error rate (EER) of 6.4% with 1-second latency and 5.4% with 5-second latency. In contrast, under the same experimental settings, swipe and phone-movement features alone led to an EER of 15.7% at a 60-second authentication latency. While our features demonstrate the potential to achieve low authentication error at very low authentication latencies, we envision that in practice these features will be collected using standard smartphone sensors and consumer-grade wearable devices. We believe our results hold transformative potential, because they shift continuous authentication from a reactive measure (detection succeeds well into the attack) to a proactive one (detection happens as the attack starts). As part of our contributions, we have made the dataset used in this paper publicly available.
Vol. 7, No. 1, pp. 3-15
Citations: 0
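The equal error rate (EER) reported above is the operating point where the false-reject rate equals the false-accept rate. This is a standard biometrics computation, sketched here as a simple threshold sweep over genuine and impostor score samples; the function name and inputs are illustrative.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Equal error rate from genuine and impostor similarity scores.

    Scores are higher for more likely genuine samples; a sample is
    accepted when its score is >= the threshold. The sweep finds the
    threshold where the false-reject rate (FRR) and false-accept rate
    (FAR) are closest, and returns their midpoint.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0
```

Perfectly separated score distributions give an EER of 0, while fully overlapping ones give 0.5; the paper's 6.4% EER at 1-second latency corresponds to the point where 6.4% of genuine windows are rejected and 6.4% of impostor windows accepted.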