Disentangling different levels of GAN fingerprints for task-specific forensics

IF 4.1 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Chi Liu, Tianqing Zhu, Yuan Zhao, Jun Zhang, Wanlei Zhou
{"title":"Disentangling different levels of GAN fingerprints for task-specific forensics","authors":"Chi Liu ,&nbsp;Tianqing Zhu ,&nbsp;Yuan Zhao ,&nbsp;Jun Zhang ,&nbsp;Wanlei Zhou","doi":"10.1016/j.csi.2023.103825","DOIUrl":null,"url":null,"abstract":"<div><p><span>Image generation using </span>generative adversarial networks<span> (GANs) has raised new security challenges recently. One promising forensic solution is verifying whether or not a suspicious image contains a GAN fingerprint, a unique trace left behind by the source GAN. Previous methods mainly focused on GAN fingerprint extraction while underestimating the downstream forensic applications<span>, and the fingerprints are often single-level which only supports one specific forensic task. In this study, we investigate the problem of disentangling different levels of GAN fingerprints to satisfy the need for varying forensics tasks. Based on an analysis of fingerprint dependency revealing the existence of two levels of fingerprints in different signal domains, we proposed a decoupling representation framework to separate and extract two types of GAN fingerprints from different domains. An adversarial data augmentation strategy plus a transformation-invariant loss is added to the framework to enhance the robustness of fingerprints to image perturbations. Then we elaborated on three typical forensics tasks and the task-specific fingerprinting using different GAN fingerprints. Extensive experiments have verified our dependency analysis, the effectiveness and robustness of the proposed fingerprint extraction framework, and the applicability of task-specific fingerprinting in real-world and simulated scenarios.</span></span></p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":null,"pages":null},"PeriodicalIF":4.1000,"publicationDate":"2023-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Standards & Interfaces","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092054892300106X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
引用次数: 0

Abstract

Image generation using generative adversarial networks (GANs) has recently raised new security challenges. One promising forensic solution is to verify whether a suspicious image contains a GAN fingerprint, a unique trace left behind by the source GAN. Previous methods focused mainly on fingerprint extraction while underestimating downstream forensic applications, and the extracted fingerprints are often single-level, supporting only one specific forensic task. In this study, we investigate the problem of disentangling different levels of GAN fingerprints to satisfy the needs of varying forensic tasks. Based on a fingerprint-dependency analysis revealing the existence of two levels of fingerprints in different signal domains, we propose a decoupling representation framework to separate and extract the two types of GAN fingerprints from their respective domains. An adversarial data augmentation strategy plus a transformation-invariant loss is added to the framework to enhance the robustness of the fingerprints to image perturbations. We then elaborate on three typical forensic tasks and the task-specific fingerprinting that each type of GAN fingerprint supports. Extensive experiments verify our dependency analysis, the effectiveness and robustness of the proposed fingerprint extraction framework, and the applicability of task-specific fingerprinting in real-world and simulated scenarios.
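The paper's implementation is not reproduced on this page, but two ideas named in the abstract can be sketched compactly. The following is a minimal, hypothetical PyTorch sketch, not the authors' code: the functions `frequency_fingerprint`, `perturb`, and `transformation_invariant_loss`, the `encoder` argument, and all hyperparameters are our own illustrative assumptions. It uses a high-frequency residual as a stand-in for a frequency-domain fingerprint and a loss that pins the fingerprint embedding of a perturbed image to that of the clean image.

```python
import torch
import torch.nn.functional as F

def frequency_fingerprint(images: torch.Tensor) -> torch.Tensor:
    """High-frequency residual of an image batch (B, C, H, W).

    Subtracting a low-pass (blurred) copy leaves the high-frequency
    content where spectral GAN traces are commonly sought. This is an
    illustrative proxy, not the paper's extraction network.
    """
    low = F.interpolate(F.avg_pool2d(images, kernel_size=4),
                        size=images.shape[-2:], mode="bilinear",
                        align_corners=False)
    return images - low

def perturb(images: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for real-world post-processing
    (additive noise plus blur), used as augmentation."""
    noisy = images + 0.02 * torch.randn_like(images)
    return F.avg_pool2d(noisy, kernel_size=3, stride=1, padding=1)

def transformation_invariant_loss(encoder, images: torch.Tensor) -> torch.Tensor:
    """One plausible form of a transformation-invariant loss:
    the fingerprint embedding of an image and of its perturbed copy
    should point in the same direction (cosine similarity -> 1).
    `encoder` is any module mapping (B, C, H, W) -> (B, D).
    """
    z_clean = encoder(frequency_fingerprint(images))
    z_pert = encoder(frequency_fingerprint(perturb(images)))
    return 1.0 - F.cosine_similarity(z_clean, z_pert, dim=-1).mean()
```

In training, such a term would be added to whatever reconstruction or classification objective drives the disentanglement, so that each fingerprint level survives the perturbations it is expected to meet in practice.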

Source journal: Computer Standards & Interfaces (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 11.90 · Self-citation rate: 16.00% · Annual articles: 67 · Review time: 6 months
Journal description: The quality of software, well-defined interfaces (hardware and software), the process of digitalisation, and accepted standards in these fields are essential for building and exploiting complex computing, communication, multimedia and measuring systems. Standards can simplify the design and construction of individual hardware and software components and help to ensure satisfactory interworking. Computer Standards & Interfaces is an international journal dealing specifically with these topics. The journal:
• Provides information about activities and progress on the definition of computer standards, software quality, interfaces and methods, at national, European and international levels
• Publishes critical comments on standards and standards activities
• Disseminates users' experiences and case studies in the application and exploitation of established or emerging standards, interfaces and methods
• Offers a forum for discussion on actual projects, standards, interfaces and methods by recognised experts
• Stimulates relevant research by providing a specialised refereed medium.