IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Articles

Spatio-Temporal Dual-Attention Transformer for Time-Series Behavioral Biometrics
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-29. DOI: 10.1109/TBIOM.2024.3394875
Kim-Ngan Nguyen;Sanka Rasnayaka;Sandareka Wickramanayake;Dulani Meedeniya;Sanjay Saha;Terence Sim
Abstract: Continuous Authentication (CA) using behavioral biometrics is a type of biometric identification that recognizes individuals based on their unique behavioral characteristics. Many behavioral biometrics can be captured through multiple sensors, each providing multichannel time-series data, and utilizing this multichannel data effectively can enhance the accuracy of behavioral-biometrics-based CA. This paper extends BehaveFormer, a framework that effectively combines time-series data from multiple sensors to provide higher security in behavioral biometrics. BehaveFormer includes two Spatio-Temporal Dual Attention Transformers (STDAT), a novel transformer introduced to extract more discriminative features from multichannel time-series data. Experimental results on two behavioral biometrics, keystroke dynamics and swipe dynamics with Inertial Measurement Unit (IMU) data, show state-of-the-art performance. For keystroke dynamics, BehaveFormer outperforms the SOTA on three publicly available datasets (Aalto DB, HMOG DB, and HuMIdb); for instance, it achieves an EER of 2.95% on HuMIdb. For swipe dynamics, it outperforms the SOTA on two publicly available datasets (HuMIdb and FETA); for instance, it achieves an EER of 3.67% on HuMIdb. Additionally, BehaveFormer shows superior performance on various CA-specific evaluation metrics, and the proposed STDAT-based architecture can also be used effectively for transfer learning. Model weights and reproducible experimental results are available at: https://github.com/nganntk/BehaveFormer
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 4, pp. 591-601.
Citations: 0
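The abstracts in this listing report Equal Error Rate (EER), the operating point where the false rejection rate equals the false acceptance rate. A minimal sketch of how EER can be estimated from genuine and impostor comparison scores; the toy Gaussian scores are illustrative, not data from any of the papers:

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Approximate the Equal Error Rate: sweep thresholds over all observed
    scores and return the (FRR + FAR) / 2 at the point where they are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = 1.0, None
    for t in thresholds:
        frr = np.mean(genuine < t)    # genuine comparisons wrongly rejected
        far = np.mean(impostor >= t)  # impostor comparisons wrongly accepted
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2
    return best_eer

# Toy scores: genuine comparisons tend to score higher than impostor ones.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.3, 0.1, 1000)
print(round(compute_eer(genuine, impostor), 3))
```

With well-separated score distributions the EER approaches zero; overlapping distributions push it toward 0.5.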
Template Inversion Attack Using Synthetic Face Images Against Real Face Recognition Systems
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-22. DOI: 10.1109/TBIOM.2024.3391759
Hatef Otroshi Shahreza;Sébastien Marcel
Abstract: In this paper, we use synthetic data and propose a new method for template inversion (TI) attacks against face recognition systems. We use synthetic data to train a face reconstruction model that generates high-resolution (i.e., 1024×1024) face images from facial templates. To this end, we use a face generator network to generate synthetic face images and extract their facial templates with the face recognition model to form our training set. We then use the synthesized dataset to learn a mapping from facial templates to the intermediate latent space of the same face generator network. We propose our method for both whitebox and blackbox TI attacks. Our experiments show that the model trained with synthetic data can reconstruct face images from templates extracted from real face images. We compare our method with previous methods in the literature in attacks against different state-of-the-art face recognition models on four face datasets (MOBIO, LFW, AgeDB, and IJB-C), demonstrating its effectiveness on real face recognition datasets. Experimental results show that our method outperforms previous methods on high-resolution 2D face reconstruction from facial templates and achieves competitive results with SOTA face reconstruction methods. Furthermore, we conduct practical presentation attacks using the generated face images in digital replay attacks against real face recognition systems, showing the vulnerability of face recognition systems to presentation attacks based on our TI attack (with synthetic training data) on real face datasets.
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 3, pp. 374-384.
Citations: 0
Identity-Aware Facial Age Editing Using Latent Diffusion
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-18. DOI: 10.1109/TBIOM.2024.3390570
Sudipta Banerjee;Govind Mittal;Ameya Joshi;Sai Pranaswi Mullangi;Chinmay Hegde;Nasir Memon
Abstract: Aging in face images is a type of intra-class variation that has a stronger impact on the performance of biometric recognition systems than in other modalities (such as iris scans and fingerprints). Improving the robustness of automated face recognition with respect to aging requires high-quality longitudinal datasets containing images of a large number of individuals collected across a long time span, ideally decades apart. Unfortunately, there is a dearth of such operational-quality longitudinal datasets. Synthesizing longitudinal data that meets these requirements is possible with modern generative models, but these tools may produce unrealistic artifacts or compromise the biometric quality of the age-edited images. In this work, we simulate facial aging and de-aging by leveraging text-to-image diffusion models with the aid of few-shot fine-tuning and intuitive textual prompting. Our method is supervised using identity-preserving loss functions that ensure biometric utility is preserved while imparting a high degree of visual realism. We ablate our method using different datasets, state-of-the-art face matchers, and age classification networks. Our empirical analysis validates the success of the proposed method compared to existing schemes. Our code is available at https://github.com/sudban3089/ID-Preserving-Facial-Aging.git
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 4, pp. 443-457.
Citations: 0
GaitSTR: Gait Recognition With Sequential Two-Stream Refinement
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-17. DOI: 10.1109/TBIOM.2024.3390626
Wanrong Zheng;Haidong Zhu;Zhaoheng Zheng;Ram Nevatia
Abstract: Gait recognition aims to identify a person from their walking sequences, serving as a useful biometric modality because it can be observed from long distances without requiring the subject's cooperation. Silhouettes and skeletons are the two primary modalities used to represent a walking sequence. Silhouette sequences lack detailed part information when different body segments overlap, and they are affected by carried objects and clothing. Skeletons, comprising joints and the bones connecting them, provide more accurate part information for different segments; however, they are sensitive to occlusions and low-quality images, causing frame-wise inconsistencies within a sequence. In this paper, we explore a two-stream representation of skeletons for gait recognition, alongside silhouettes. By fusing the combined silhouette and skeleton data, we refine the two skeleton streams, joints and bones, through self-correction in graph convolution and cross-modal correction with temporal consistency from silhouettes. We demonstrate that, with refined skeletons, gait recognition performance improves further on public gait recognition datasets compared with state-of-the-art methods, without extra annotations.
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 4, pp. 528-538.
Citations: 0
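The GaitSTR abstract distinguishes a joint stream from a bone stream, where bones connect pairs of joints. A minimal sketch of deriving the bone stream as joint-to-joint difference vectors; the `edges` list here is a hypothetical skeleton layout, not the one used in the paper:

```python
import numpy as np

def joints_to_bones(joints, edges):
    """Derive bone vectors from joint coordinates: each bone is the vector
    difference between its child and parent joint. `edges` lists
    (parent, child) index pairs; real skeleton layouts vary by dataset."""
    return np.stack([joints[child] - joints[parent] for parent, child in edges])

# Toy 2D skeleton with three joints forming a chain.
joints = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 2)]
print(joints_to_bones(joints, edges))
```

Because bones encode relative rather than absolute positions, the bone stream is invariant to global translation of the skeleton, which is one reason two-stream skeleton models use both.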
Gender Privacy Angular Constraints for Face Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-17. DOI: 10.1109/TBIOM.2024.3390586
Zohra Rezgui;Nicola Strisciuglio;Raymond Veldhuis
Abstract: Deep learning-based face recognition systems produce templates that encode sensitive information beyond identity, such as gender and ethnicity. This poses legal and ethical problems, as the collection of biometric data should be minimized and specific to a designated task. We propose two privacy constraints, which can be added to a recognition loss, that hide the gender attribute. The first minimizes the angle between the gender-centroid embeddings. The second minimizes the angle between gender-specific embeddings and their opposing gender-centroid weight vectors. Both constraints enforce overlap of the gender-specific embedding distributions. Furthermore, they have a direct interpretation in the embedding space and do not require a large number of trainable parameters, as two fully connected layers suffice for satisfactory results. We provide extensive evaluation across several datasets and face recognition networks, and we compare our method to three state-of-the-art methods. Our method maintains high verification performance while significantly improving privacy in a cross-database setting, without increasing the computational load of template comparison. We also show that different training data can yield varying levels of effectiveness for privacy-enhancing methods that implement data minimization.
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 3, pp. 352-363.
Citations: 0
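The first constraint in the abstract above minimizes the angle between gender-centroid embeddings. A minimal sketch of measuring that angle from a batch of labeled embeddings; this is illustrative only and does not reproduce the paper's loss or training setup:

```python
import numpy as np

def gender_centroid_angle(embeddings, genders):
    """Angle (radians) between the two gender-centroid embeddings.
    A constraint in the spirit of the paper would add a monotone function
    of this angle to the recognition loss, pushing the two gender-specific
    embedding distributions to overlap."""
    c0 = embeddings[genders == 0].mean(axis=0)
    c1 = embeddings[genders == 1].mean(axis=0)
    cos = np.dot(c0, c1) / (np.linalg.norm(c0) * np.linalg.norm(c1))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Random embeddings carry no gender signal, so the angle is typically small.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
genders = rng.integers(0, 2, size=100)
print(round(gender_centroid_angle(emb, genders), 3))
```

An angle near zero means the centroids are aligned, so a linear gender classifier gains little from direction alone; a large angle indicates separable gender clusters.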
Trans-FD: Transformer-Based Representation Interaction for Face De-Morphing
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-16. DOI: 10.1109/TBIOM.2024.3390056
Min Long;Qiangqiang Duan;Le-Bing Zhang;Fei Peng;Dengyong Zhang
Abstract: Face morphing attacks aim to deceive face recognition systems with a facial image that contains multiple individuals' biometric information, and they have been demonstrated to pose a significant threat to commercial face recognition systems and human experts. Although many face morphing detection methods have been proposed in recent years to enhance the security of face recognition systems, little attention has been paid to restoring the identity of the accomplice from a morphed image. In this paper, we propose Trans-FD, a novel model that uses Transformer-based representation interaction to restore the identity of the accomplice. To separate the accomplice's identity effectively, Trans-FD applies a Transformer to perform representation interaction in the separation network. Additionally, it uses CNN encoders to extract multi-scale features and establishes skip connections between the encoder and the generator through the Transformer-based separation network, providing detailed information to the generator. Experiments demonstrate that Trans-FD effectively restores the accomplice's face and outperforms previous works in restoration accuracy and image quality.
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 3, pp. 385-397.
Citations: 0
Face Super-Resolution Quality Assessment Based on Identity and Recognizability
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-16. DOI: 10.1109/TBIOM.2024.3389982
Weiling Chen;Weitao Lin;Xiaoyi Xu;Liqun Lin;Tiesong Zhao
Abstract: Face Super-Resolution (FSR) plays a crucial role in enhancing low-resolution face images, which is essential for various face-related tasks. However, FSR may alter individuals' identities or introduce artifacts that affect recognizability, a problem not well assessed by existing Image Quality Assessment (IQA) methods. In this paper, we present both subjective and objective evaluations for FSR-IQA, resulting in a benchmark dataset and a reduced-reference quality metric, respectively. First, we incorporate a novel criterion of identity preservation and recognizability to develop our Face Super-resolution Quality Dataset (FSQD). Second, we analyze the correlation between identity preservation and recognizability and investigate effective feature extractions for both. Third, we propose a training-free IQA framework called Face Identity and Recognizability Evaluation of Super-resolution (FIRES). Experimental results on FSQD demonstrate that FIRES achieves competitive performance.
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 3, pp. 364-373.
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-03. DOI: 10.1109/TBIOM.2024.3378798
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 2, p. C2.
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-03. DOI: 10.1109/TBIOM.2024.3378799
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 2, p. C3.
Citations: 0
A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-04-03. DOI: 10.1109/TBIOM.2024.3384704
Shinan Zou;Jianbo Xiong;Chao Fan;Chuanfu Shen;Shiqi Yu;Jin Tang
Abstract: Gait recognition is a biometric technology that has received extensive attention. Most existing gait recognition algorithms are unimodal, and the few multimodal ones perform multimodal fusion only once; none may fully exploit the complementary advantages of the multiple modalities. In this paper, considering the temporal and spatial characteristics of gait data, we propose a multi-stage feature fusion strategy (MSFFS) that performs multimodal fusion at different stages of the feature extraction process. We also propose an adaptive feature fusion module (AFFM) that considers the semantic association between silhouettes and skeletons: the fusion process fuses different silhouette areas with their more related skeleton joints. Since visual appearance changes and time passage co-occur in a gait period, we propose a multiscale spatial-temporal feature extractor (MSSTFE) to learn spatial-temporal linkage features thoroughly; specifically, MSSTFE extracts and aggregates spatial-temporal linkage information at different spatial scales. Combining the strategy and modules above, we propose a multi-stage adaptive feature fusion (MSAFF) neural network, which shows state-of-the-art performance in many experiments on three datasets. Additionally, MSAFF is equipped with feature dimensional pooling (FD Pooling), which can significantly reduce the dimension of the gait representations without hindering accuracy. The code is available at https://github.com/ShinanZou/MSAFF
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 4, pp. 539-549.
Citations: 0
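The AFFM described above adaptively weights silhouette and skeleton features. A loose stand-in, not the paper's module: a two-way softmax gate computed from both feature vectors, with `gate_w` standing in for learned parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_fuse(sil, skel, gate_w):
    """Fuse silhouette and skeleton feature vectors with data-dependent
    weights: project the concatenated features to two logits, softmax them,
    and take the gated sum. gate_w has shape (2 * D, 2)."""
    gate = softmax(np.concatenate([sil, skel]) @ gate_w)  # shape (2,)
    return gate[0] * sil + gate[1] * skel

# Toy features of dimension D = 8 with random (untrained) gate parameters.
rng = np.random.default_rng(1)
D = 8
sil, skel = rng.normal(size=D), rng.normal(size=D)
fused = adaptive_fuse(sil, skel, rng.normal(size=(2 * D, 2)))
print(fused.shape)  # (8,)
```

Because the gate depends on the inputs, different samples can lean on different modalities, which is the intuition behind adaptive (rather than fixed-weight) fusion.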