DCM-Net: A Diffusion Model-Based Detection Network Integrating the Characteristics of Copy-Move Forgery

IF 8.4 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Shaowei Weng;Jianhao Zhang;Tanguo Zhu;Lifang Yu;Chunyu Zhang
DOI: 10.1109/TMM.2024.3521685
Journal: IEEE Transactions on Multimedia, vol. 27, pp. 503–514
Published: 2024-12-24 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10814718/
Citations: 0

Abstract

Directly applying a generic object detection network to copy-move forgery detection (CMFD) inevitably yields low detection accuracy. This paper therefore proposes DCM-Net, a diffusion-model-based object detection network that incorporates the characteristics of copy-move forgery, to significantly enhance CMFD performance. As the first diffusion-model-based CMFD network, DCM-Net introduces three improvements. First, a high-similarity box padding strategy pads high-similarity boxes, rather than the random boxes used in the standard diffusion model, to the ground-truth boxes, better guiding the subsequent dual-attention detection heads (DDHs) to focus on high-similarity regions. Second, unlike previous deep-learning-based CMFD networks that use self-correlation to indiscriminately transform all classification features extracted by the feature extractor into high-similarity features, an adaptive feature combination strategy is proposed to obtain the optimal feature transformation, enabling the DDHs to distinguish source and target regions more effectively. Finally, to give the detection heads more accurate source/target localization and discrimination, the DDHs are equipped with efficient multi-scale attention and a contextual transformer to generate tampered features that fuse precise spatial position information with rich global contextual information. Experiments on three publicly available datasets (USC-ISI, CoMoFoD, and COVERAGE) demonstrate that DCM-Net outperforms several state-of-the-art algorithms in both similarity detection and source/target differentiation.
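The self-correlation computation mentioned in the abstract is a common building block in CMFD networks: each spatial location of a feature map is compared against every other location, and the strongest matches become a similarity feature map that highlights duplicated regions. The sketch below illustrates this general idea with NumPy; the shapes, the cosine-similarity choice, and the `top_k` parameter are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def self_correlation(features, top_k=3):
    """Illustrative self-correlation step (assumed, not DCM-Net's exact form).

    features: array of shape (C, H, W) from a feature extractor.
    Returns an (H, W, top_k) map of each location's top-k cosine
    similarities to all other locations; duplicated (copy-moved)
    regions produce similarities close to 1.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w).T                 # (HW, C), one row per location
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sim = flat @ flat.T                                 # (HW, HW) cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # ignore trivial self-matches
    topk = np.sort(sim, axis=1)[:, -top_k:][:, ::-1]    # top-k matches, descending
    return topk.reshape(h, w, top_k)

# Two locations share an identical feature vector, mimicking a copied patch:
f = np.zeros((2, 4, 4))
f[:, 0, 0] = [1.0, 0.0]
f[:, 3, 3] = [1.0, 0.0]
sim_map = self_correlation(f, top_k=1)
print(sim_map[0, 0, 0])  # close to 1.0: location (0,0) found its duplicate
```

Networks in this family typically feed such a similarity map, concatenated with the original features, into downstream detection heads; the abstract's point is that DCM-Net replaces this indiscriminate transformation with an adaptive combination of features.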
Source journal: IEEE Transactions on Multimedia (Engineering & Technology – Telecommunications)
CiteScore: 11.70
Self-citation rate: 11.00%
Annual publications: 576
Review time: 5.5 months
Journal introduction: The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.