Exposing the Forgery Clues of DeepFakes via Exploring the Inconsistent Expression Cues

IF 5.0 · CAS Region 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Jiatong Liu, Lina Wang, Run Wang, Jianpeng Ke, Xi Ye, Yadi Wu
Journal: International Journal of Intelligent Systems, vol. 2025, no. 1
DOI: 10.1155/int/7945646
Published: 2025-01-13 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1155/int/7945646
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/7945646
Citations: 0

Abstract

The pervasive prevalence of DeepFakes poses a profound threat to individual privacy and societal stability. Mistaking synthetic videos of a celebrity for genuine footage, or passing off impersonated forgeries as authentic, are just two of the consequences DeepFakes enable. We observe that current detectors which blindly deploy deep learning techniques are ineffective at capturing subtle forgery clues when generative models produce remarkably realistic faces. Inspired by the fact that synthesis operations inevitably modify the eye and mouth regions to match the target face to the identity or expression of the source face, we conjecture that the continuity of the facial movement patterns that convey expression in genuine faces will be disrupted or completely broken in synthetic faces, making it a potentially formidable indicator for DeepFake detection. To test this conjecture, we use a dual-branch network to capture the inconsistent patterns of facial movement within the eye and mouth regions separately. Extensive experiments on the popular FaceForensics++, Celeb-DF-v1, Celeb-DF-v2, and DFDC-Preview datasets demonstrate both the effectiveness and the robustness of our method, which outperforms state-of-the-art baselines. Moreover, our method exhibits greater robustness against adversarial attacks, with attack success rates (ASR) of 54.8% under the I-FGSM attack and 43.1% under the PGD attack on the DeepFakes subset of FaceForensics++.
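The abstract only sketches the architecture; no implementation details are given on this page. As a purely illustrative NumPy sketch of the dual-branch idea, two separate branches process eye-region and mouth-region motion features, and their embeddings are fused before a real/fake classification head. All dimensions, weights, and function names below are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, W1, W2):
    # One-hidden-layer MLP branch: ReLU(x @ W1) @ W2
    h = np.maximum(x @ W1, 0.0)
    return h @ W2

# Hypothetical sizes: 64-d motion features per region, 16-d branch embedding
d_in, d_hid, d_emb = 64, 32, 16
We1, We2 = rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_emb))  # eye branch
Wm1, Wm2 = rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_emb))  # mouth branch
W_head = rng.normal(size=(2 * d_emb, 2))  # fused embedding -> {real, fake} logits

def dual_branch_forward(eye_feat, mouth_feat):
    # Each region is encoded separately, then concatenated and classified
    z = np.concatenate([branch(eye_feat, We1, We2),
                        branch(mouth_feat, Wm1, Wm2)], axis=-1)
    logits = z @ W_head
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

# Forward pass on a batch of 4 random region-feature pairs
probs = dual_branch_forward(rng.normal(size=(4, d_in)), rng.normal(size=(4, d_in)))
```

A real detector would replace the random weights with trained CNN/temporal encoders over cropped eye and mouth sequences; the point here is only the two-branch, late-fusion structure.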
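The reported robustness numbers are attack success rates under I-FGSM and PGD. As a self-contained illustration of how I-FGSM and an ASR are typically computed — here against a toy logistic-regression "detector", not the paper's model — where a lower ASR means the detector is harder to fool:

```python
import numpy as np

def ifgsm_attack(x, y, w, b, eps=0.5, alpha=0.1, steps=10):
    # Iterative FGSM: repeated sign-gradient ascent on the BCE loss,
    # clipped to an L-infinity ball of radius eps around the clean input.
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # P(fake) per sample
        grad = np.outer(p - y, w)                   # d(BCE)/dx for each sample
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

def attack_success_rate(x, x_adv, y, w, b):
    # ASR: fraction of originally correct predictions the attack flips
    pred = lambda z: ((z @ w + b) > 0).astype(int)
    correct = pred(x) == y
    flipped = pred(x_adv) != y
    return (correct & flipped).sum() / max(correct.sum(), 1)
```

PGD differs mainly in starting from a random point inside the eps-ball; against a deep detector the closed-form gradient above would be replaced by backpropagation through the network.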


Source journal: International Journal of Intelligent Systems (Engineering & Technology – Computer Science: Artificial Intelligence)

CiteScore: 11.30
Self-citation rate: 14.30%
Articles per year: 304
Review time: 9 months
Journal description: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis, creation, information retrieval, human–computer interaction, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.