Crafting imperceptible and transferable adversarial examples: leveraging conditional residual generator and wavelet transforms to deceive deepfake detection

Zhiyuan Li, Xin Jin, Qian Jiang, Puming Wang, Shin-Jye Lee, Shaowen Yao, Wei Zhou
{"title":"制作不可感知和可转移的对抗示例:利用条件残差发生器和小波变换欺骗深度伪造检测","authors":"Zhiyuan Li, Xin Jin, Qian Jiang, Puming Wang, Shin-Jye Lee, Shaowen Yao, Wei Zhou","doi":"10.1007/s00371-024-03605-x","DOIUrl":null,"url":null,"abstract":"<p>The malicious abuse of deepfakes has raised serious ethical, security, and privacy concerns, eroding public trust in digital media. While existing deepfake detectors can detect fake images, they are vulnerable to adversarial attacks. Although various adversarial attacks have been explored, most are white-box attacks difficult to realize in practice, and the generated adversarial examples have poor quality easily noticeable to the human eye. For this detection task, the goal should be to generate adversarial examples that can deceive detectors while maintaining high quality and authenticity. We propose a method to generate imperceptible and transferable adversarial examples aimed at fooling unknown deepfake detectors. The method combines a conditional residual generator with an accessible detector as a surrogate model, utilizing the detector’s relative distance loss function to generate highly transferable adversarial examples. Discrete wavelet transform is also introduced to enhance image quality. Extensive experiments demonstrate that the adversarial examples generated by our method not only possess excellent visual quality but also effectively deceive various detectors, exhibiting superior cross-detector transferability in black-box attacks. Our code is available at:https://github.com/SiSuiyuHang/ITA.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Crafting imperceptible and transferable adversarial examples: leveraging conditional residual generator and wavelet transforms to deceive deepfake detection\",\"authors\":\"Zhiyuan Li, Xin Jin, Qian Jiang, Puming Wang, Shin-Jye Lee, Shaowen Yao, Wei Zhou\",\"doi\":\"10.1007/s00371-024-03605-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The malicious abuse of deepfakes has raised serious ethical, security, and privacy concerns, eroding public trust in digital media. While existing deepfake detectors can detect fake images, they are vulnerable to adversarial attacks. Although various adversarial attacks have been explored, most are white-box attacks difficult to realize in practice, and the generated adversarial examples have poor quality easily noticeable to the human eye. For this detection task, the goal should be to generate adversarial examples that can deceive detectors while maintaining high quality and authenticity. We propose a method to generate imperceptible and transferable adversarial examples aimed at fooling unknown deepfake detectors. The method combines a conditional residual generator with an accessible detector as a surrogate model, utilizing the detector’s relative distance loss function to generate highly transferable adversarial examples. Discrete wavelet transform is also introduced to enhance image quality. Extensive experiments demonstrate that the adversarial examples generated by our method not only possess excellent visual quality but also effectively deceive various detectors, exhibiting superior cross-detector transferability in black-box attacks. 
Our code is available at:https://github.com/SiSuiyuHang/ITA.</p>\",\"PeriodicalId\":501186,\"journal\":{\"name\":\"The Visual Computer\",\"volume\":\"23 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Visual Computer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00371-024-03605-x\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03605-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The malicious abuse of deepfakes has raised serious ethical, security, and privacy concerns, eroding public trust in digital media. While existing deepfake detectors can detect fake images, they are vulnerable to adversarial attacks. Although various adversarial attacks have been explored, most are white-box attacks that are difficult to realize in practice, and the generated adversarial examples have poor quality that is easily noticeable to the human eye. For this detection task, the goal should be to generate adversarial examples that can deceive detectors while maintaining high quality and authenticity. We propose a method to generate imperceptible and transferable adversarial examples aimed at fooling unknown deepfake detectors. The method combines a conditional residual generator with an accessible detector as a surrogate model, utilizing the detector's relative distance loss function to generate highly transferable adversarial examples. Discrete wavelet transform is also introduced to enhance image quality. Extensive experiments demonstrate that the adversarial examples generated by our method not only possess excellent visual quality but also effectively deceive various detectors, exhibiting superior cross-detector transferability in black-box attacks. Our code is available at: https://github.com/SiSuiyuHang/ITA.
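The abstract describes the pipeline only at a high level; below is a minimal PyTorch sketch of one way its pieces could fit together. The toy generator architecture, the LL-preserving wavelet blend (`dwt_preserve_ll`), the margin-based stand-in for the detector's relative distance loss, the surrogate's two-logit layout, and all hyperparameters are assumptions made for illustration, not the authors' implementation; consult the linked repository for that.

```python
# Hypothetical sketch of the attack pipeline described in the abstract.
# Architecture, sub-band blending rule, loss form, and logit layout are
# all assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt(x):
    """Single-level orthonormal Haar DWT on NCHW tensors (H, W even)."""
    a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
    c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
    return ((a + b + c + d) / 2,  # LL: coarse structure
            (a + b - c - d) / 2,  # LH
            (a - b + c - d) / 2,  # HL
            (a - b - c + d) / 2)  # HH


def haar_idwt(ll, lh, hl, hh):
    """Inverse of haar_dwt (perfect reconstruction)."""
    a, b = (ll + lh + hl + hh) / 2, (ll + lh - hl - hh) / 2
    c, d = (ll - lh + hl - hh) / 2, (ll - lh - hl + hh) / 2
    out = ll.new_zeros(ll.shape[0], ll.shape[1], ll.shape[2] * 2, ll.shape[3] * 2)
    out[:, :, 0::2, 0::2], out[:, :, 0::2, 1::2] = a, b
    out[:, :, 1::2, 0::2], out[:, :, 1::2, 1::2] = c, d
    return out


class ResidualGenerator(nn.Module):
    """Toy stand-in for the conditional residual generator: maps a fake
    image to a bounded residual that is added back onto the input."""
    def __init__(self, ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ch, 3, padding=1), nn.Tanh())

    def forward(self, x, eps=0.03):
        return (x + eps * self.net(x)).clamp(0, 1)


def dwt_preserve_ll(x_adv, x_clean):
    """One plausible use of the DWT for image quality (an assumption):
    keep the clean image's low-frequency LL band so color and structure
    survive, and let the perturbation live in the detail bands only."""
    ll_c, _, _, _ = haar_dwt(x_clean)
    _, lh, hl, hh = haar_dwt(x_adv)
    return haar_idwt(ll_c, lh, hl, hh)


def attack_loss(surrogate, x_adv, x_clean, margin=1.0, lam=10.0):
    """Margin-style stand-in for the relative distance loss: push the
    surrogate's 'fake' logit below its 'real' logit by a margin, while
    an MSE term keeps the example close to the original."""
    logits = surrogate(x_adv)  # assumed shape [N, 2]: index 0 real, 1 fake
    fool = F.relu(logits[:, 1] - logits[:, 0] + margin).mean()
    return fool + lam * F.mse_loss(x_adv, x_clean)


# One optimization step against a frozen surrogate detector (placeholder net).
gen = ResidualGenerator()
surrogate = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

x_fake = torch.rand(4, 3, 64, 64)  # stand-in batch of deepfake images
x_adv = dwt_preserve_ll(gen(x_fake), x_fake).clamp(0, 1)
loss = attack_loss(surrogate, x_adv, x_fake)
opt.zero_grad(); loss.backward(); opt.step()
```

Preserving the clean LL band is only one way a wavelet transform can trade attack strength for imperceptibility; the paper's actual sub-band strategy may differ, so treat this sketch as an illustration of the idea rather than the method.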
