Mf-net: multi-feature fusion network based on two-stream extraction and multi-scale enhancement for face forgery detection

IF 5.0 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Hanxian Duan, Qian Jiang, Xin Jin, Michal Wozniak, Yi Zhao, Liwen Wu, Shaowen Yao, Wei Zhou
{"title":"Mf-net:基于双流提取和多尺度增强的多特征融合网络,用于人脸伪造检测","authors":"Hanxian Duan, Qian Jiang, Xin Jin, Michal Wozniak, Yi Zhao, Liwen Wu, Shaowen Yao, Wei Zhou","doi":"10.1007/s40747-024-01634-6","DOIUrl":null,"url":null,"abstract":"<p>Due to the increasing sophistication of face forgery techniques, the images generated are becoming more and more realistic and difficult for human eyes to distinguish. These face forgery techniques can cause problems such as fraud and social engineering attacks in facial recognition and identity verification areas. Therefore, researchers have worked on face forgery detection studies and have made significant progress. Current face forgery detection algorithms achieve high detection accuracy within-dataset. However, it is difficult to achieve satisfactory generalization performance in cross-dataset scenarios. In order to improve the cross-dataset detection performance of the model, this paper proposes a multi-feature fusion network based on two-stream extraction and multi-scale enhancement. First, we design a two-stream feature extraction module to obtain richer feature information. Secondly, the multi-scale feature enhancement module is proposed to focus the model more on information related to the current sub-region from different scales. Finally, the forgery detection module calculates the overlap between the features of the input image and real images during the training phase to determine the forgery regions. The method encourages the model to mine forgery features and learns generic and robust features not limited to a particular feature. Thus, the model achieves high detection accuracy and performance. We achieve the AUC of 99.70% and 90.71% on FaceForensics++ and WildDeepfake datasets. The generalization experiments on Celeb-DF-v2 and WildDeepfake datasets achieve the AUC of 80.16% and 65.15%. Comparison experiments with multiple methods on other benchmark datasets confirm the superior generalization performance of our proposed method while ensuring model detection accuracy. Our code can be found at: https://github.com/1241128239/MFNet.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mf-net: multi-feature fusion network based on two-stream extraction and multi-scale enhancement for face forgery detection\",\"authors\":\"Hanxian Duan, Qian Jiang, Xin Jin, Michal Wozniak, Yi Zhao, Liwen Wu, Shaowen Yao, Wei Zhou\",\"doi\":\"10.1007/s40747-024-01634-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Due to the increasing sophistication of face forgery techniques, the images generated are becoming more and more realistic and difficult for human eyes to distinguish. These face forgery techniques can cause problems such as fraud and social engineering attacks in facial recognition and identity verification areas. Therefore, researchers have worked on face forgery detection studies and have made significant progress. Current face forgery detection algorithms achieve high detection accuracy within-dataset. However, it is difficult to achieve satisfactory generalization performance in cross-dataset scenarios. In order to improve the cross-dataset detection performance of the model, this paper proposes a multi-feature fusion network based on two-stream extraction and multi-scale enhancement. 
First, we design a two-stream feature extraction module to obtain richer feature information. Secondly, the multi-scale feature enhancement module is proposed to focus the model more on information related to the current sub-region from different scales. Finally, the forgery detection module calculates the overlap between the features of the input image and real images during the training phase to determine the forgery regions. The method encourages the model to mine forgery features and learns generic and robust features not limited to a particular feature. Thus, the model achieves high detection accuracy and performance. We achieve the AUC of 99.70% and 90.71% on FaceForensics++ and WildDeepfake datasets. The generalization experiments on Celeb-DF-v2 and WildDeepfake datasets achieve the AUC of 80.16% and 65.15%. Comparison experiments with multiple methods on other benchmark datasets confirm the superior generalization performance of our proposed method while ensuring model detection accuracy. Our code can be found at: https://github.com/1241128239/MFNet.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-11-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-024-01634-6\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01634-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Due to the increasing sophistication of face forgery techniques, the generated images are becoming ever more realistic and difficult for the human eye to distinguish. These techniques can enable fraud and social-engineering attacks in facial recognition and identity verification. Researchers have therefore worked on face forgery detection and made significant progress. Current face forgery detection algorithms achieve high accuracy within a single dataset, but satisfactory generalization across datasets remains difficult to attain. To improve the cross-dataset detection performance of the model, this paper proposes a multi-feature fusion network based on two-stream extraction and multi-scale enhancement. First, we design a two-stream feature extraction module to obtain richer feature information. Second, a multi-scale feature enhancement module is proposed to focus the model on information related to the current sub-region at different scales. Finally, during the training phase the forgery detection module computes the overlap between the features of the input image and those of real images to determine the forged regions. The method encourages the model to mine forgery features and to learn generic, robust features that are not limited to a particular cue, so the model achieves high detection accuracy and performance. We achieve AUCs of 99.70% and 90.71% on the FaceForensics++ and WildDeepfake datasets, and generalization experiments on Celeb-DF-v2 and WildDeepfake achieve AUCs of 80.16% and 65.15%. Comparison experiments with multiple methods on other benchmark datasets confirm the superior generalization performance of the proposed method while maintaining detection accuracy. Our code can be found at: https://github.com/1241128239/MFNet.
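The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of how the three described modules could fit together. It assumes, hypothetically, that the two streams are an RGB stream and a high-frequency (high-pass-filtered) stream, that multi-scale enhancement uses parallel dilated convolutions with channel attention, and that "overlap with real images" is modelled as cosine similarity against a running prototype of real-image features; module names and hyper-parameters are illustrative and are not taken from the authors' repository.

```python
# Minimal sketch of a two-stream extraction + multi-scale enhancement +
# overlap-based forgery head, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamExtractor(nn.Module):
    """RGB stream + high-frequency stream, fused by concatenation (assumed design)."""

    def __init__(self, out_ch: int = 64):
        super().__init__()
        self.rgb = nn.Sequential(
            nn.Conv2d(3, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Fixed Laplacian high-pass filter as a stand-in for a noise/frequency stream.
        hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("hp_kernel", hp.view(1, 1, 3, 3).repeat(3, 1, 1, 1))
        self.freq = nn.Sequential(
            nn.Conv2d(3, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x):
        high = F.conv2d(x, self.hp_kernel, padding=1, groups=3)
        return self.fuse(torch.cat([self.rgb(x), self.freq(high)], dim=1))


class MultiScaleEnhancement(nn.Module):
    """Parallel dilated convolutions + channel attention (assumed design)."""

    def __init__(self, ch: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations]
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid()
        )

    def forward(self, f):
        multi = sum(F.relu(b(f)) for b in self.branches) / len(self.branches)
        return f + multi * self.attn(multi)  # residual, scale-aware re-weighting


class OverlapForgeryHead(nn.Module):
    """Scores each location by its similarity to a running prototype of
    real-image features; low overlap suggests a forged region (assumed reading)."""

    def __init__(self, ch: int = 64, momentum: float = 0.99):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("real_proto", torch.zeros(ch))
        self.classifier = nn.Linear(1, 2)  # overlap score -> real/fake logits

    @torch.no_grad()
    def update_prototype(self, feats, labels):
        # feats: (B, C, H, W); labels: (B,) with 0 = real, 1 = fake
        real = feats[labels == 0]
        if real.numel() > 0:
            mean = real.mean(dim=(0, 2, 3))
            self.real_proto.mul_(self.momentum).add_(mean, alpha=1 - self.momentum)

    def forward(self, feats):
        proto = self.real_proto.view(1, -1, 1, 1)
        overlap = F.cosine_similarity(feats, proto.expand_as(feats), dim=1)  # (B, H, W)
        score = overlap.mean(dim=(1, 2)).unsqueeze(1)                        # (B, 1)
        return self.classifier(score), overlap  # logits + per-region overlap map


if __name__ == "__main__":
    x = torch.randn(2, 3, 256, 256)          # a batch of face crops
    labels = torch.tensor([0, 1])            # 0 = real, 1 = fake
    feats = MultiScaleEnhancement()(TwoStreamExtractor()(x))
    head = OverlapForgeryHead()
    head.update_prototype(feats, labels)     # prototype is updated only during training
    logits, overlap_map = head(feats)
    print(logits.shape, overlap_map.shape)   # torch.Size([2, 2]) torch.Size([2, 64, 64])
```

The key design point the abstract emphasizes is that the overlap computation pushes the model toward generic cues rather than dataset-specific artifacts; in this sketch that role is played by the prototype comparison, while the actual mechanism in Mf-net may differ and should be checked against the released code.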

Source journal
Complex & Intelligent Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.