Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection

Impact Factor: 6.3 | CAS Region 1, Computer Science | JCR Q1: COMPUTER SCIENCE, THEORY & METHODS
Decheng Liu; Tao Chen; Chunlei Peng; Nannan Wang; Ruimin Hu; Xinbo Gao
{"title":"Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection","authors":"Decheng Liu;Tao Chen;Chunlei Peng;Nannan Wang;Ruimin Hu;Xinbo Gao","doi":"10.1109/TIFS.2024.3516561","DOIUrl":null,"url":null,"abstract":"Due to the successful development of deep image generation technology, visual data forgery detection would play a more important role in social and economic security. Existing forgery detection methods suffer from unsatisfactory generalization ability to determine the authenticity in the unseen domain. In this paper, we propose a novel Attention Consistency Refined masked frequency forgery representation model toward a generalizing face forgery detection algorithm (ACMF). Most forgery technologies always bring in high-frequency aware cues, which make it easy to distinguish source authenticity but difficult to generalize to unseen artifact types. The masked frequency forgery representation module is designed to explore robust forgery cues by randomly discarding high-frequency information. In addition, we find that the forgery saliency map inconsistency through the detection network could affect the generalizability. Thus, the forgery attention consistency is introduced to force detectors to focus on similar attention regions for better generalization ability. Experiment results on several public face forgery datasets (FaceForensic++, DFD, Celeb-DF, WDF and DFDC datasets) demonstrate the superior performance of the proposed method compared with the state-of-the-art methods. The source code and models are publicly available at \n<uri>https://github.com/chenboluo/ACMF</uri>\n.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"504-515"},"PeriodicalIF":6.3000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10795239/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
引用次数: 0

Abstract

With the rapid development of deep image generation technology, visual data forgery detection will play an increasingly important role in social and economic security. Existing forgery detection methods suffer from unsatisfactory generalization when determining authenticity in unseen domains. In this paper, we propose a novel Attention Consistency refined Masked Frequency forgery representation model toward generalizing face forgery detection (ACMF). Most forgery techniques introduce high-frequency cues, which make it easy to distinguish source authenticity but difficult to generalize to unseen artifact types. The masked frequency forgery representation module is designed to explore robust forgery cues by randomly discarding high-frequency information. In addition, we find that inconsistency of the forgery saliency maps produced by the detection network can harm generalizability. Thus, a forgery attention consistency constraint is introduced to force the detector to focus on similar attention regions, yielding better generalization ability. Experimental results on several public face forgery datasets (FaceForensics++, DFD, Celeb-DF, WDF, and DFDC) demonstrate the superior performance of the proposed method compared with state-of-the-art methods. The source code and models are publicly available at https://github.com/chenboluo/ACMF .
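To make the two ideas in the abstract concrete, the sketch below illustrates (1) randomly discarding high-frequency image content before the detector sees it and (2) an attention-consistency penalty between saliency maps. This is a minimal PyTorch-style sketch based only on the abstract, not the authors' implementation (the official code is at https://github.com/chenboluo/ACMF); the function names `random_high_freq_mask` and `attention_consistency_loss`, the FFT-based low-pass masking, and the cosine-similarity consistency term are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two ideas described in the abstract:
# (1) randomly discarding high-frequency image content, and
# (2) encouraging consistent attention/saliency maps.
# All function names and hyper-parameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def random_high_freq_mask(images: torch.Tensor, drop_prob: float = 0.5,
                          radius_ratio: float = 0.25) -> torch.Tensor:
    """Randomly zero out high-frequency FFT coefficients of a batch of images.

    images: (B, C, H, W) tensor in [0, 1].
    drop_prob: probability of applying the low-pass mask to a given sample.
    radius_ratio: fraction of the (fft-shifted) spectrum around the centre
        that is treated as low frequency and always kept.
    """
    B, C, H, W = images.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))

    # Circular low-pass mask: 1 inside the low-frequency radius, 0 outside.
    yy, xx = torch.meshgrid(
        torch.arange(H, device=images.device),
        torch.arange(W, device=images.device),
        indexing="ij",
    )
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    low_pass = (dist <= radius_ratio * min(H, W) / 2).to(images.dtype)

    # Per-sample decision: either discard the high frequencies or leave the image untouched.
    apply = (torch.rand(B, 1, 1, 1, device=images.device) < drop_prob).to(images.dtype)
    mask = apply * low_pass + (1 - apply)

    filtered = torch.fft.ifft2(torch.fft.ifftshift(spectrum * mask, dim=(-2, -1)))
    return filtered.real.clamp(0, 1)


def attention_consistency_loss(attn_a: torch.Tensor, attn_b: torch.Tensor) -> torch.Tensor:
    """Penalise disagreement between two saliency/attention maps of shape (B, 1, H, W)."""
    attn_a = F.normalize(attn_a.flatten(1), dim=1)
    attn_b = F.normalize(attn_b.flatten(1), dim=1)
    return (1 - (attn_a * attn_b).sum(dim=1)).mean()
```

In such a setup, the detector would presumably see both the original and the frequency-masked view during training, with the consistency term applied to their attention maps alongside the usual real/fake classification loss.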
Source Journal
IEEE Transactions on Information Forensics and Security
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
About the journal: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.