IET Biometrics — Latest Articles

Masked Face Recognition Based on ConvNeXt With Coordinate Attention and AdaCos Loss
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-09-17 | DOI: 10.1049/bme2/8878553
Chaoying Tang, Yanbin Cui, Qiaoyue Huang

Abstract: Compared with traditional identity authentication methods, biometric recognition technologies offer superior security, reliability, and accuracy; among them, face recognition is of particular research value. However, the COVID-19 pandemic, along with cold and hazy weather, has made mask wearing widespread, and masks cause traditional face recognition methods to fail. This paper develops a deep-learning-based biometric recognition method for masked faces. A simulated masked face (SMF) dataset and a real masked face dataset are constructed first. The images are preprocessed with masked face detection, and mask edges are identified to segment the facial area. A ConvNeXt network with coordinate attention (CA) is proposed, and the AdaCos loss is employed for comprehensive feature extraction. Extensive experiments demonstrate a remarkable 99% recognition rate on the SMF dataset and 94% on the real masked face dataset, showing that the method can effectively handle face recognition under masks with very high accuracy.

Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/8878553
Citations: 0
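The AdaCos loss named in the title is a cosine-softmax loss with an adaptive (or fixed) scale on the cosine logits. As a rough illustration only, here is a minimal numpy sketch of the fixed-scale variant, s = sqrt(2)·log(C−1); the function names are ours, and this is not the authors' implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Unit-normalize rows so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def adacos_fixed_scale(num_classes):
    # Fixed scale from the AdaCos paper: s = sqrt(2) * log(C - 1).
    return np.sqrt(2.0) * np.log(num_classes - 1)

def adacos_loss(features, class_centers, labels):
    """Cross-entropy over scaled cosine logits (fixed-scale AdaCos sketch)."""
    f = l2_normalize(features)        # (N, D) embeddings
    w = l2_normalize(class_centers)   # (C, D) class weight vectors
    logits = adacos_fixed_scale(w.shape[0]) * (f @ w.T)  # (N, C)
    # Numerically stable log-softmax cross-entropy.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

In training, `class_centers` would be the (learned) final-layer weights; the fixed scale removes the need to hand-tune the margin/scale hyperparameters of losses like ArcFace.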
ID3: Identity-Driven Learning Based on 3D Reconstruction and Frame-Level Residual Enhancement for Deepfakes Detection
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-09-12 | DOI: 10.1049/bme2/3764746
Hui Ma, Jin Zhang, Xihong Chen, Wenhao Chu, Jiabao Guo, Junze Zheng, Liying Yang, Yanyan Liang

Abstract: With the rapid advancement of face manipulation technology, forged videos of celebrities and politicians have appeared and caused a pernicious social impact. In this light, forged-video detection has recently become a research hotspot. Most previous detection approaches focus mainly on forgery artifacts caused by specific generation defects without considering individual identity information, so their detection accuracy is not satisfactory. For instance, for a forged video of a certain celebrity, everyone knows who she or he is, yet this important identity clue goes unused in current detection methods. To address this problem, a novel perspective on face forgery detection via identity-driven learning, named Identity-Driven Deepfakes Detection (ID³), is proposed. The method considers and explores the similarity between suspect inputs and the inherent properties (e.g., geometry and appearance) of the same identity. Specifically, through 3D reconstruction, the physical differences between forged and real videos are captured in the learning process, and frame-level residual enhancement further improves detection accuracy. The validity of the proposed method is experimentally verified on several benchmark datasets, and its detection performance is better than that of some state-of-the-art works.

Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/3764746
Citations: 0
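The two ingredients the abstract names, identity consistency and frame-level residuals, can be illustrated with a small numpy sketch. The helper names, the temporal-difference residual, and the cosine-consistency score are our assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def frame_residuals(video):
    """Frame-level residuals |frame_t - frame_{t-1}| that amplify temporal
    forgery artifacts. video: (T, H, W) grayscale frames in [0, 1]."""
    return np.abs(np.diff(video, axis=0))

def identity_consistency(frame_embeddings, reference_identity):
    """Mean cosine similarity between per-frame identity embeddings and a
    reference identity vector; low consistency would suggest a swapped face."""
    e = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    r = reference_identity / np.linalg.norm(reference_identity)
    return float((e @ r).mean())
```

In an identity-driven detector, `reference_identity` would come from enrolled footage of the known person (here it is just a placeholder vector).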
A DeepConvLSTM Approach for Continuous Authentication Using Operational System Performance Counters
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-08-26 | DOI: 10.1049/bme2/8262252
César H. G. Andrade, Hendrio L. S. Bragança, Horácio Fernandes, Eduardo Feitosa, Eduardo Souto

Abstract: Authentication in personal and corporate computer systems predominantly relies on login and password credentials, which are vulnerable to unauthorized access, especially when genuine users leave their devices unlocked. To address this issue, continuous authentication (CA) systems based on behavioral biometrics have gained attention. Traditional CA models leverage user-device interactions such as mouse movements, typing dynamics, and speech recognition. This paper introduces a novel approach that uses system performance counters, attributes such as memory usage, CPU load, and network activity collected passively by operating systems (OSs), to develop a robust and low-intrusion authentication mechanism. The method employs a deep network architecture combining convolutional neural networks (CNNs) with long short-term memory (LSTM) layers to analyze temporal patterns and identify unique user behaviors. Unlike traditional modalities, performance counters capture subtle system-level usage patterns that are harder to mimic, enhancing security and resilience to attacks. A trust model is integrated into the CA framework to balance security and usability, avoiding interruptions for genuine users while blocking impostors in real time. The approach is evaluated on two new datasets, COUNT-SO-I (26 users) and COUNT-SO-II (37 users), collected in real-world scenarios without specific task constraints. The results demonstrate the feasibility and effectiveness of the proposed method, achieving 99% detection accuracy (ACC) for impostor users within an average of 17.2 s while maintaining a seamless user experience. These findings highlight the potential of performance-counter-based CA systems for practical applications such as safeguarding sensitive systems in corporate, governmental, and personal environments.

Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/8262252
Citations: 0
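Counter streams like the ones described are typically segmented into overlapping fixed-length windows before being fed to a CNN-LSTM model. A generic preprocessing sketch follows; the window and step sizes are placeholders, not values from the paper:

```python
import numpy as np

def sliding_windows(counters, window, step):
    """Segment a multivariate performance-counter stream of shape (T, F)
    into overlapping windows of shape (N, window, F), the usual input
    layout for a DeepConvLSTM-style classifier."""
    T = counters.shape[0]
    starts = range(0, T - window + 1, step)
    return np.stack([counters[s:s + window] for s in starts])
```

Each window would then be scored by the model, and the per-window scores fed into the trust model that decides when to lock the session.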
A Dynamic Interactive Fusion Model for Extracting Fatigue Features Based on the Audiovisual Data Flow of Air Traffic Controllers
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-08-22 | DOI: 10.1049/bme2/7626919
Zhiyuan Shen, Xueyan Li, Junqi Bai, Kai Wang, Yifan Xu

Abstract: Fatigue among air traffic controllers is a factor contributing to civil aviation crashes. Existing methods for extracting and fusing fatigue features face two main challenges: (1) the low accuracy of traditional single-mode fatigue recognition methods, and (2) the neglect of multimodal data correlations when traditional multimodal methods concatenate and fuse features. This paper proposes an interactive algorithm for fusing and recognizing multimodal fatigue features that combines multihead attention (MHA) and cross-attention (XATTN), built on improved speech and facial fatigue recognition models. First, an improved Conformer model, which combines a convolutional module with a transformer encoder, is applied to controllers' radiotelephony communication data, using the filter-bank method to extract deep speech features. Second, controllers' facial data are processed via pointwise convolutions in a stack of inverted residual layers, which facilitates facial feature extraction. Third, speech and facial features are fused interactively by combining MHA and XATTN, achieving high accuracy in recognizing the fatigue state of controllers working in complex operational environments. A series of experiments was conducted on audiovisual datasets collected from actual air traffic control (ATC) missions. Compared against four competing multimodal fusion methods, the proposed method achieved a recognition accuracy of 99.2%, which was 8.9% higher than the speech-only model and 0.4% higher than the facial-only model.

Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/7626919
Citations: 0
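Cross-attention (XATTN) between two modalities can be sketched in a few lines of numpy. This is the generic scaled dot-product form, where one modality's features act as queries over the other's, not the authors' exact fusion module:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: e.g., speech-feature queries
    attend over facial-feature keys/values (single head, no projections)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Nq, Nk) attention scores
    return softmax(scores, axis=-1) @ values  # (Nq, d) fused features
```

A full MHA + XATTN block would add learned query/key/value projections and multiple heads; this sketch only shows the interaction that plain concatenation misses.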
FingerUNeSt++: Improving Fingertip Segmentation in Contactless Fingerprint Imaging Using Deep Learning
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-07-19 | DOI: 10.1049/bme2/9982355
Laurenz Ruzicka, Bernhard Kohn, Clemens Heitzinger

Abstract: Biometric identification systems, particularly those utilizing fingerprints, have become essential for authenticating users due to their reliability and uniqueness. The recent shift towards contactless fingerprint sensors requires precise fingertip segmentation against changing backgrounds to maintain high accuracy. This study introduces FingerUNeSt++, a novel deep-learning model combining the ResNeSt and UNet++ architectures, aimed at improving segmentation accuracy and inference speed for contactless fingerprint images. The model significantly outperforms traditional and state-of-the-art methods, achieving superior performance metrics. Extensive data augmentation and an optimized model architecture contribute to its robustness and efficiency. This advancement holds promise for enhancing the effectiveness of contactless biometric systems in diverse real-world applications.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/9982355
Citations: 0
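Segmentation quality of the kind reported here is conventionally scored with the Dice coefficient between predicted and ground-truth masks. The sketch below is that generic metric as a point of reference, not the paper's evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    1.0 means a perfect fingertip mask, 0.0 no overlap at all."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```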
Deepfake Video Traceability and Authentication via Source Attribution
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-07-13 | DOI: 10.1049/bme2/5687970
Canghai Shi, Minglei Qiao, Zhuang Li, Zahid Akhtar, Bin Wang, Meng Han, Tong Qiao

Abstract: In recent years, deepfake videos have emerged as a significant threat to societal and cybersecurity landscapes. Artificial intelligence (AI) techniques are used to create convincing deepfakes, and the main countermeasure is deepfake detection. Most mainstream detectors are currently based on deep neural networks, and such deep-learning detection frameworks face several open problems: dependence on large annotated datasets, lack of interpretability, and limited attention to source traceability. To overcome these limitations, this paper proposes a novel training-free deepfake detection framework based on interpretable inherent source attribution. The proposed framework not only distinguishes between real and fake videos but also traces their origins using camera fingerprints. Moreover, a new deepfake video dataset was constructed from 10 distinct camera devices. Experimental evaluations on multiple datasets show that the proposed method attains high detection accuracies (ACCs) comparable to state-of-the-art (SOTA) deep-learning techniques while offering superior traceability. This framework provides a robust and efficient solution for deepfake video authentication and source attribution, making it highly adaptable to real-world scenarios.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/5687970
Citations: 0
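Camera fingerprinting is commonly done by correlating a frame's noise residual against a per-device reference pattern. The crude numpy sketch below illustrates that idea only; the 4-neighbor high-pass is a stand-in for a proper denoising filter, and none of this is the paper's exact method:

```python
import numpy as np

def noise_residual(img):
    """Crude high-pass residual: subtract the 4-neighbor mean (a stand-in
    for the denoising filter used in real camera-fingerprint extraction)."""
    neigh = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
             np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return img - neigh

def fingerprint_correlation(residual, fingerprint):
    """Normalized cross-correlation between a residual and a reference
    device fingerprint; a high score attributes the frame to that camera."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Attribution then reduces to scoring a suspect frame against each enrolled device fingerprint and picking the best match (or rejecting all of them).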
A Dermatoglyphic Study of Primary Fingerprints Pattern in Relation to Gender and Blood Group Among Residents of Kathmandu Valley, Nepal
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-06-20 | DOI: 10.1049/bme2/9993120
Sushma Paudel, Sushmita Paudel, Samikshya Kafle

Abstract: Fingerprints are unique biometric identifiers that reflect intricate genetic and environmental/physiological influences. Beyond their forensic significance, they can offer insights into physiological traits such as blood group and gender, which can help narrow the search in forensic analysis. This exploratory study aims to identify potential associations between fingerprint patterns, gender, and blood groups within a defined regional cohort in Kathmandu, Nepal. The preliminary study included 290 students (144 males and 146 females) from Himalayan Whitehouse International College (HWIC). Fingerprint patterns (loops, whorls, and arches) were analyzed and compared with participants' ABO-Rh blood groups. Statistical analyses, including chi-square tests, were used to determine associations and trends. Loops were the most common fingerprint pattern (57.14%), followed by whorls (35%) and arches (7.86%). Blood group B+ve was the most prevalent (33.1%) in the study population. A significant association between gender and fingerprint pattern was observed: loops were predominant in females, while males showed a higher frequency of whorls. Although no significant relationship was observed between ABO blood groups and fingerprint patterns, a strong association was found between fingerprint patterns and the Rh factor (p = 0.0496). Loops were more prevalent among Rh-positive (Rh+ve) individuals, while whorls were more common among Rh-negative (Rh-ve) individuals. Specific fingers also showed distinct patterns more frequently: arches were most prevalent on the index fingers of both hands; loops were most abundant on both pinky fingers and the left middle finger; and whorls were most frequent on the ring fingers of both hands and the right thumb. The findings reinforce global patterns of blood group and fingerprint distribution, in which Rh+ve individuals represent the majority and loops are the most dominant pattern. The gender-specific trends suggest a nuanced interplay of genetics, with females displaying a higher frequency of loops and males more whorls; similarly, some blood groups are more likely to exhibit a specific set of fingerprint patterns. This research shows gender-based differences and the influence of genetic factors, particularly the Rh factor, on fingerprint patterns. These findings contribute to the growing field of dermatoglyphics, with implications for forensic science and population genetics.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/9993120
Citations: 0
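The chi-square test of independence used in the study can be run on any contingency table (e.g., fingerprint pattern versus Rh factor). Here is the Pearson statistic in plain numpy; the table values in the usage test are illustrative, not the study's data:

```python
import numpy as np

def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table:
    sum over cells of (observed - expected)^2 / expected, where
    expected = row_total * col_total / grand_total."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())
```

The p-value (like the study's p = 0.0496) then comes from comparing this statistic against a chi-square distribution with (r−1)(c−1) degrees of freedom, e.g. via `scipy.stats.chi2`.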
Advanced Image Quality Assessment for Hand- and Finger-Vein Biometrics
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-05-05 | DOI: 10.1049/bme2/8869140
Simon Kirchgasser, Christof Kauba, Georg Wimmer, Andreas Uhl

Abstract: Natural scene statistics, commonly used in no-reference image quality measures, and a proposed deep-learning (DL) quality assessment approach are suggested as biometric quality indicators for vasculature images. NIQE (natural image quality evaluator) and BRISQUE (blind/referenceless image spatial quality evaluator), when trained on common images with usual distortions, do not work well for assessing the quality of vasculature pattern samples; however, variants trained on high- and low-quality vasculature sample data behave as expected of a biometric quality estimator in most cases (deviations from the overall trend occur for certain datasets or feature-extraction methods). The DL-based quality metric proposed in this work is designed to assign the correct quality class to vasculature pattern samples in most cases, independent of whether finger or hand vein patterns are assessed. The experiments evaluating NIQE, BRISQUE, and the newly proposed DL quality metric were conducted on a total of 13 publicly available finger and hand vein datasets and involve three distinct template representations (two of them designed especially for vascular biometrics). The proposed (trained) quality measures are compared to several classical quality metrics, and the achieved results underline their promising behavior.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/8869140
Citations: 0
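NIQE and BRISQUE are substantial learned models; as a much simpler point of comparison of the "classical quality metric" kind the abstract mentions, a no-reference sharpness score can be computed as the variance of a Laplacian response. This is a generic baseline for intuition, not one of the metrics evaluated in the paper:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbor Laplacian response: a classical
    no-reference sharpness/contrast score. Blurry, low-contrast vein
    images score low; sharp, textured ones score high."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float(lap.var())
```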
Deep Distillation Hashing for Palmprint and Finger Vein Retrieval
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-04-25 | DOI: 10.1049/bme2/9017371
Chenlong Liu, Lu Yang, Wen Zhou, Yuan Li, Fanchang Hao

Abstract: With the increasing application of biometric recognition technology in daily life, the number of registered users is growing rapidly, making fast retrieval techniques increasingly important for biometric recognition. However, existing biometric recognition models are often overly complex, making them difficult to deploy on resource-constrained terminal devices. Inspired by knowledge distillation (KD) for model simplification and deep hashing for fast image retrieval, we propose a new model that achieves lightweight palmprint and finger vein retrieval. The model integrates hash distillation loss, classification distillation loss, and label-supervised loss within a KD framework, and its network design further improves the retrieval and recognition performance of the lightweight model. Experimental results demonstrate that the method improves the performance of the student model on multiple palmprint and finger vein datasets, with retrieval precision and recognition accuracy surpassing several existing advanced hashing methods.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/9017371
Citations: 0
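Deep hashing reduces retrieval to Hamming distance between binary codes, and distillation adds a teacher-student penalty on the pre-binarization outputs. The sketch below shows those two generic building blocks in numpy; the MSE form of the distillation term is our assumption, not the paper's exact loss formulation:

```python
import numpy as np

def binarize(codes):
    """Sign-binarize real-valued hash outputs into ±1 codes."""
    return np.where(codes >= 0, 1, -1)

def hamming_distance(a, b):
    """Number of differing bits between two ±1 codes; retrieval ranks the
    gallery by this distance."""
    return int((a != b).sum())

def hash_distillation_loss(student_logits, teacher_logits):
    """MSE between student and teacher pre-binarization hash outputs,
    a generic form of a hash-distillation term."""
    return float(np.mean((student_logits - teacher_logits) ** 2))
```

The appeal for terminal devices is that after training, matching needs only XOR/popcount-style operations on short binary codes rather than a full forward pass per comparison.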
Wavelet-Based Texture Mining and Enhancement for Face Forgery Detection
IF 1.8 | CAS Q4 | Computer Science
IET Biometrics | Pub Date: 2025-02-13 | DOI: 10.1049/bme2/2217175
Xin Li, Hui Zhao, Bingxin Xu, Hongzhe Liu

Abstract: Due to the abuse of deep forgery technology, research on forgery detection methods has become increasingly urgent. The correspondence between frequency-spectrum information and spatial clues, often neglected by current methods, can contribute to more accurate and generalized forgery detection. Motivated by this, we propose a wavelet-based texture mining and enhancement framework for face forgery detection. First, we introduce a frequency-guided texture enhancement (FGTE) module that mines high-frequency information to improve the network's extraction of effective texture features. Next, we propose a global-local feature refinement (GLFR) module to enhance the model's use of both global semantic features and local texture features. Moreover, an interactive fusion module (IFM) is designed to fully incorporate the enhanced texture clues into the spatial features. The proposed method is extensively evaluated for face forgery detection on five public datasets: FaceForensics++ (FF++), the DeepFake Detection Challenge (DFDC), Celeb-DF v2, the DFDC preview (DFDC-P), and DeepFake Detection (DFD), yielding promising performance in both within-dataset and cross-dataset experiments.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/2217175
Citations: 0
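The high-frequency sub-bands that wavelet-based mining draws on come from a 2D wavelet decomposition. A one-level Haar transform in numpy shows where those bands originate; this is a standard illustration (with simple averaging rather than orthonormal scaling), and the abstract does not state which wavelet the authors use:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform on an image with even dimensions.
    Returns (LL, LH, HL, HH): LL is the low-frequency approximation,
    while LH/HL/HH are the high-frequency sub-bands carrying texture cues."""
    # Vertical pass: pairwise averages and differences of rows.
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    # Horizontal pass on each result.
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH
```

A frequency-guided module would keep LL for semantics and feed LH/HL/HH (where blending seams and upsampling artifacts concentrate) into the texture branch.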