IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Articles

Adversarial Samples Generated by Self-Forgery for Face Forgery Detection
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2025-01-13 DOI: 10.1109/TBIOM.2025.3529026
Hanxian Duan;Qian Jiang;Xiaoyuan Xu;Yu Wang;Huasong Yi;Shaowen Yao;Xin Jin
{"title":"Adversarial Samples Generated by Self-Forgery for Face Forgery Detection","authors":"Hanxian Duan;Qian Jiang;Xiaoyuan Xu;Yu Wang;Huasong Yi;Shaowen Yao;Xin Jin","doi":"10.1109/TBIOM.2025.3529026","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3529026","url":null,"abstract":"As deep learning techniques continue to advance making face synthesis realistic and indistinguishable. Algorithms need to be continuously improved to cope with increasingly sophisticated forgery techniques. Current face forgery detectors achieve excellent results when detecting training and testing from the same dataset. However, the detector performance degrades when generalized to unknown forgery methods. One of the most effective ways to address this problem is to train the model using synthetic data. This helps the model learn a generic representation for deep forgery detection. In this article, we propose a new strategy for synthesis of training data. To improve the quality and sensitivity to forgeries, we include a Multi-scale Feature Aggregation Module and a Forgery Identification Module in the generator and discriminator. The Multi-scale Feature Aggregation Module captures finer details and textures while reducing forgery traces. The Forgery Identification Module more acutely detects traces and irregularities in the forgery images. It can better distinguish between real and fake images and improve overall detection accuracy. In addition, we employ an adversarial training strategy to dynamically construct the detector. This effectively explores the enhancement space of forgery samples. Through extensive experiments, we demonstrate the effectiveness of the proposed synthesis strategy. Our code can be found at: <uri>https://github.com/1241128239/ASG-SF</uri>.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"432-443"},"PeriodicalIF":0.0,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AIM-Bone: Texture Discrepancy Generation and Localization for Generalized Deepfake Detection
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2025-01-06 DOI: 10.1109/TBIOM.2025.3526655
Boyuan Liu;Xin Zhang;Hefei Ling;Zongyi Li;Runsheng Wang;Hanyuan Zhang;Ping Li
{"title":"AIM-Bone: Texture Discrepancy Generation and Localization for Generalized Deepfake Detection","authors":"Boyuan Liu;Xin Zhang;Hefei Ling;Zongyi Li;Runsheng Wang;Hanyuan Zhang;Ping Li","doi":"10.1109/TBIOM.2025.3526655","DOIUrl":"https://doi.org/10.1109/TBIOM.2025.3526655","url":null,"abstract":"Deep synthesis multimedia content, especially human face manipulation poses a risk of visual and auditory confusion, highlighting the call for generalized face forgery detection methods. In this paper, we propose a novel method for fake sample synthesis, along with a dual auto-encoder network for generalized deepfake detection. First, we delve into the texture discrepancy between tampered and unperturbed regions within forged images and impose models to learn such features by adopting Augmentation Inside Masks (AIM). It is capable of sabotaging the texture consistency within a single real image and generating textures that are commonly seen in fake images. It is realized by exhibiting forgery clues of discrepancy in noise patterns, colors, resolutions, and especially the existence of GAN (Generative Adversarial Network) features, including GAN textures, deconvolution traces, GAN distribution, etc. To the best of our knowledge, this work is the first to incorporate GAN features in fake sample synthesizing. The second is that we design a Bone-shaped dual auto-encoder with a powerful image texture filter bridged in between to aid forgery detection and localization in two streams. Reconstruction learning in the color stream avoids over-fitting in specific textures and imposes learning color-related features. Moreover, the GAN fingerprints harbored within the output image can be in furtherance of AIM and produce texture-discrepant samples for further training. The noise stream takes input processed by the proposed texture filter to focus on noise perspective and predict forgery region localization, subjecting to the constraint of mask label produced by AIM. We conduct extensive experiments on multiple benchmark datasets and the superior performance has proven the effectiveness of AIM-Bone and its advantage against current state-of-the-art methods. Our source code is available at <monospace><uri>https://github.com/heart74/AIM-Bone.git</uri></monospace>.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"422-431"},"PeriodicalIF":0.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unbiased-Diff: Analyzing and Mitigating Biases in Diffusion Model-Based Face Image Generation
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2025-01-02 DOI: 10.1109/TBIOM.2024.3525037
Malsha V. Perera;Vishal M. Patel
{"title":"Unbiased-Diff: Analyzing and Mitigating Biases in Diffusion Model-Based Face Image Generation","authors":"Malsha V. Perera;Vishal M. Patel","doi":"10.1109/TBIOM.2024.3525037","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3525037","url":null,"abstract":"Diffusion-based generative models have become increasingly popular in applications such as synthetic data generation and image editing, due to their ability to generate realistic, high-quality images. However, these models can exacerbate existing social biases, particularly regarding attributes like gender and race, potentially impacting downstream applications. In this paper, we analyze the presence of social biases in diffusion-based face generations and propose a novel sampling process guidance algorithm to mitigate these biases. Specifically, during the diffusion sampling process, we guide the generation to produce samples with attribute distributions that align with a balanced or desired attribute distribution. Our experiments demonstrate that diffusion models exhibit biases across multiple datasets in terms of gender and race. Moreover, our proposed method effectively mitigates these biases, making diffusion-based face generation more fair and inclusive.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"384-395"},"PeriodicalIF":0.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-27 DOI: 10.1109/TBIOM.2024.3513762
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2024.3513762","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3513762","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10816732","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-27 DOI: 10.1109/TBIOM.2024.3513761
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2024.3513761","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3513761","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10816704","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE T-BIOM Editorial Board Changes
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-27 DOI: 10.1109/TBIOM.2024.3515292
Naser Damer;Weihong Deng;Jianjiang Feng;Vishal M. Patel;Ajita Rattani;Mark Nixon
{"title":"IEEE T-BIOM Editorial Board Changes","authors":"Naser Damer;Weihong Deng;Jianjiang Feng;Vishal M. Patel;Ajita Rattani;Mark Nixon","doi":"10.1109/TBIOM.2024.3515292","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3515292","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10816705","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BIAS: A Body-Based Interpretable Active Speaker Approach
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-18 DOI: 10.1109/TBIOM.2024.3520030
Tiago Roxo;Joana Cabral Costa;Pedro R. M. Inácio;Hugo Proença
{"title":"BIAS: A Body-Based Interpretable Active Speaker Approach","authors":"Tiago Roxo;Joana Cabral Costa;Pedro R. M. Inácio;Hugo Proença","doi":"10.1109/TBIOM.2024.3520030","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3520030","url":null,"abstract":"State-of-the-art Active Speaker Detection (ASD) approaches heavily rely on audio and facial features to perform, which is not a sustainable approach in wild scenarios. Although these methods achieve good results in the standard AVA-ActiveSpeaker set, a recent wilder ASD dataset (WASD) showed the limitations of such models and raised the need for new approaches. As such, we propose BIAS, a model that, for the first time, combines audio, face, and body information, to accurately predict active speakers in varying/challenging conditions. Additionally, we design BIAS to provide interpretability by proposing a novel use for Squeeze-and-Excitation blocks, namely in attention heatmaps creation and feature importance assessment. For a full interpretability setup, we annotate an ASD-related actions dataset (ASD-Text) to finetune a ViT-GPT2 for text scene description to complement BIAS interpretability. The results show that BIAS is state-of-the-art in challenging conditions where body-based features are of utmost importance (Columbia, open-settings, and WASD), and yields competitive results in AVA-ActiveSpeaker, where face is more influential than body for ASD. BIAS interpretability also shows the features/aspects more relevant towards ASD prediction in varying settings, making it a strong baseline for further developments in interpretable ASD models, and is available at <uri>https://github.com/Tiago-Roxo/BIAS</uri>.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"410-421"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inpainting Diffusion Synthetic and Data Augment With Feature Keypoints for Tiny Partial Fingerprints
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-13 DOI: 10.1109/TBIOM.2024.3517330
Mao-Hsiu Hsu;Yung-Ching Hsu;Ching-Te Chiu
{"title":"Inpainting Diffusion Synthetic and Data Augment With Feature Keypoints for Tiny Partial Fingerprints","authors":"Mao-Hsiu Hsu;Yung-Ching Hsu;Ching-Te Chiu","doi":"10.1109/TBIOM.2024.3517330","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3517330","url":null,"abstract":"The advancement of fingerprint research within public academic circles has been trailing behind facial recognition, primarily due to the scarcity of extensive publicly available datasets, despite fingerprints being widely used across various domains. Recent progress has seen the application of deep learning techniques to synthesize fingerprints, predominantly focusing on large-area fingerprints within existing datasets. However, with the emergence of AIoT and edge devices, the importance of tiny partial fingerprints has been underscored for their faster and more cost-effective properties. Yet, there remains a lack of publicly accessible datasets for such fingerprints. To address this issue, we introduce publicly available datasets tailored for tiny partial fingerprints. Using advanced generative deep learning, we pioneer diffusion methods for fingerprint synthesis. By combining random sampling with inpainting diffusion guided by feature keypoints masks, we enhance data augmentation while preserving key features, achieving up to 99.1% recognition matching rate. To demonstrate the usefulness of our fingerprint images generated using our approach, we conducted experiments involving model training for various tasks, including denoising, deblurring, and deep forgery detection. The results showed that models trained with our generated datasets outperformed those trained without our datasets or with other synthetic datasets. This indicates that our approach not only produces diverse fingerprints but also improves the model’s generalization capabilities. Furthermore, our approach ensures confidentiality without compromise by partially transforming randomly sampled synthetic fingerprints, which reduces the likelihood of real fingerprints being leaked. The total number of generated fingerprints published in this article amounts to 818,077. Moving forward, we are ongoing updates and releases to contribute to the advancement of the tiny partial fingerprint field. The code and our generated tiny partial fingerprint dataset can be accessed at <uri>https://github.com/Hsu0623/Inpainting-Diffusion-Synthetic-and-Data-Augment-with-Feature-Keypoints-for-Tiny-Partial-Fingerprints.git</uri>","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"396-409"},"PeriodicalIF":0.0,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Deep CNN-Based Feature Extraction and Matching of Pores for Fingerprint Recognition
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-12 DOI: 10.1109/TBIOM.2024.3516634
Mohammed Ali;Chunyan Wang;M. Omair Ahmad
{"title":"A Deep CNN-Based Feature Extraction and Matching of Pores for Fingerprint Recognition","authors":"Mohammed Ali;Chunyan Wang;M. Omair Ahmad","doi":"10.1109/TBIOM.2024.3516634","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3516634","url":null,"abstract":"The inherent characteristics of fingerprint pores, including their immutability, permanence, and uniqueness in terms of size, shape, and position along ridges, make them suitable candidates for fingerprint recognition. In contrast to only a limited number of other landmarks in a fingerprint, such as minutia, the presence of a large number of pores even in a small fingerprint segment is a very attractive characteristic of pores for fingerprint recognition. A pore-based fingerprint recognition system has two main modules: a pore detection module and a pore feature extraction and matching module. The focus of this paper is on the latter module, in which the features of the detected pores in a query fingerprint are extracted, uniquely represented and then used for matching these pores with those in a template fingerprint. Fingerprint recognition systems that use convolutional neural networks (CNNs) in the design of this module have automatic feature extraction capabilities. However, CNNs used in these modules have inadequate capability of capturing deep-level features. Moreover, the pore matching part of these modules heavily relies only on the Euclidean distance metric, which if used alone, may not provide an accurate measure of similarity between the pores. In this paper, a novel pore feature extraction and matching module is presented in which a CNN architecture is proposed to generate highly representational and discriminative hierarchical features and a balance between the performance and complexity is achieved by using depthwise and depthwise separable convolutions. Furthermore, an accurate composite metric, encompassing the Euclidean distance, angle, and magnitudes difference between the vectors of pore representations, is introduced to measure the similarity between the pores of the query and template fingerprint images. Extensive experimentation is carried out to demonstrate the effectiveness of the proposed scheme in terms of performance and complexity, and its superiority over the existing state-of-the-art pore-based fingerprint recognition systems.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"368-383"},"PeriodicalIF":0.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-Preserving Face Recognition and Verification With Lensless Camera
IEEE transactions on biometrics, behavior, and identity science Pub Date : 2024-12-11 DOI: 10.1109/TBIOM.2024.3515144
Chris Henry;M. Salman Asif;Zhu Li
{"title":"Privacy-Preserving Face Recognition and Verification With Lensless Camera","authors":"Chris Henry;M. Salman Asif;Zhu Li","doi":"10.1109/TBIOM.2024.3515144","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3515144","url":null,"abstract":"Facial recognition technology is becoming increasingly ubiquitous nowadays. Facial recognition systems rely upon large amounts of facial image data. This raises serious privacy concerns since storing this facial data securely is challenging given the constant risk of data breaches or hacking. This paper proposes a privacy-preserving face recognition and verification system that works without compromising the user’s privacy. It utilizes sensor measurements captured by a lensless camera - FlatCam. These sensor measurements are visually unintelligible, preserving the user’s privacy. Our solution works without the knowledge of the camera sensor’s Point Spread Function and does not require image reconstruction at any stage. In order to perform face recognition without information on face images, we propose a Discrete Cosine Transform (DCT) domain sensor measurement learning scheme that can recognize faces without revealing face images. We compute a frequency domain representation by computing the DCT of the sensor measurement at multiple resolutions and then splitting the result into multiple subbands. The network trained using this DCT representation results in huge accuracy gains compared to the accuracy obtained after directly training with sensor measurement. In addition, we further enhance the security of the system by introducing pseudo-random noise at random DCT coefficient locations as a secret key in the proposed DCT representation. It is virtually impossible to recover the face images from the DCT representation without the knowledge of the camera parameters and the noise locations. We evaluated the proposed system on a real lensless camera dataset - the FlatCam Face dataset. Experimental results demonstrate the system is highly secure and can achieve a recognition accuracy of 93.97% while maintaining strong user privacy.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"354-367"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0