DCU-Net: a dual-channel U-shaped network for image splicing forgery detection.

IF 4.5 · CAS Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Hongwei Ding, Leiyang Chen, Qi Tao, Zhongwang Fu, Liang Dong, Xiaohui Cui
{"title":"DCU-Net: a dual-channel U-shaped network for image splicing forgery detection.","authors":"Hongwei Ding,&nbsp;Leiyang Chen,&nbsp;Qi Tao,&nbsp;Zhongwang Fu,&nbsp;Liang Dong,&nbsp;Xiaohui Cui","doi":"10.1007/s00521-021-06329-4","DOIUrl":null,"url":null,"abstract":"<p><p>The detection and location of image splicing forgery are a challenging task in the field of image forensics. It is to study whether an image contains a suspicious tampered area pasted from another image. In this paper, we propose a new image tamper location method based on dual-channel U-Net, that is, DCU-Net. The detection framework based on DCU-Net is mainly divided into three parts: encoder, feature fusion, and decoder. Firstly, high-pass filters are used to extract the residual of the tampered image and generate the residual image, which contains the edge information of the tampered area. Secondly, a dual-channel encoding network model is constructed. The input of the model is the original tampered image and the tampered residual image. Then, the deep features extracted from the dual-channel encoding network are fused for the first time, and then the tampered features with different granularity are extracted by dilation convolution, and then, the secondary fusion is carried out. Finally, the fused feature map is input into the decoder, and the predicted image is decoded layer by layer. The experimental results on Casia2.0 and Columbia datasets show that DCU-Net performs better than the latest algorithm and can accurately locate tampered areas. In addition, the attack experiments show that DCU-Net model has good robustness and can resist noise and JPEG recompression attacks.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 7","pages":"5015-5031"},"PeriodicalIF":4.5000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s00521-021-06329-4","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computing & Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00521-021-06329-4","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 17

Abstract

The detection and localization of image splicing forgery is a challenging task in the field of image forensics: the goal is to determine whether an image contains a suspicious tampered region pasted from another image. In this paper, we propose a new image tamper localization method based on a dual-channel U-Net, called DCU-Net. The DCU-Net detection framework consists of three parts: encoder, feature fusion, and decoder. First, high-pass filters are used to extract the residual of the tampered image and generate a residual image, which contains the edge information of the tampered region. Second, a dual-channel encoding network is constructed whose inputs are the original tampered image and the tampered residual image. The deep features extracted by the two encoding channels are fused for the first time, tampered features of different granularities are then extracted with dilated convolutions, and a second fusion is performed. Finally, the fused feature map is fed into the decoder, and the prediction map is decoded layer by layer. Experimental results on the CASIA 2.0 and Columbia datasets show that DCU-Net outperforms recent state-of-the-art algorithms and can accurately locate tampered regions. In addition, attack experiments show that the DCU-Net model is robust and can resist noise and JPEG recompression attacks.
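The pipeline described in the abstract lends itself to a compact sketch. Below is a minimal, illustrative PyTorch implementation of that flow, assuming a simple 3x3 Laplacian-style high-pass kernel for the residual image and assumed layer widths, encoder depth, and dilation rates (1, 2, 4); these choices are placeholders for demonstration, not the authors' exact DCU-Net configuration.

```python
# Minimal sketch of the abstract's pipeline: a fixed high-pass filter produces
# a residual image, a dual-channel encoder processes the RGB image and the
# residual, their deep features are fused, passed through dilated convolutions,
# fused a second time, and decoded into a tamper prediction map.
# All layer sizes and the high-pass kernel are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def high_pass_residual(image: torch.Tensor) -> torch.Tensor:
    """Apply a 3x3 Laplacian-style high-pass filter to each channel (assumed kernel)."""
    kernel = torch.tensor([[-1., -1., -1.],
                           [-1.,  8., -1.],
                           [-1., -1., -1.]], device=image.device) / 8.0
    kernel = kernel.view(1, 1, 3, 3).repeat(image.size(1), 1, 1, 1)
    return F.conv2d(image, kernel, padding=1, groups=image.size(1))


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 conv + BN + ReLU layers, the usual U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class DualChannelUNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Two parallel encoders: one for the RGB image, one for its residual.
        self.enc_rgb = nn.ModuleList([conv_block(3, 32), conv_block(32, 64)])
        self.enc_res = nn.ModuleList([conv_block(3, 32), conv_block(32, 64)])
        self.pool = nn.MaxPool2d(2)
        # First fusion of the two deep feature maps, then dilated convolutions
        # capture tampered features at several receptive-field sizes.
        self.fuse1 = nn.Conv2d(128, 64, 1)
        self.dilated = nn.ModuleList(
            [nn.Conv2d(64, 64, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.fuse2 = nn.Conv2d(64 * 3, 64, 1)
        # Decoder upsamples back toward the input resolution, layer by layer.
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(32, 32)
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        residual = high_pass_residual(image)
        x, r = image, residual
        for blk_x, blk_r in zip(self.enc_rgb, self.enc_res):
            x, r = self.pool(blk_x(x)), self.pool(blk_r(r))
        fused = F.relu(self.fuse1(torch.cat([x, r], dim=1)))        # first fusion
        multi = torch.cat([F.relu(d(fused)) for d in self.dilated], dim=1)
        fused = F.relu(self.fuse2(multi))                           # second fusion
        out = self.dec(self.up(fused))
        out = F.interpolate(out, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(out))                        # tamper probability map


# Usage: a 256x256 RGB image yields a same-size tamper probability map.
if __name__ == "__main__":
    model = DualChannelUNetSketch()
    pred = model(torch.randn(1, 3, 256, 256))
    print(pred.shape)  # torch.Size([1, 1, 256, 256])
```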

Source journal: Neural Computing & Applications (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.40
Self-citation rate: 8.30%
Articles published: 1280
Review time: 6.9 months
Journal description: Neural Computing & Applications is an international journal which publishes original research and other information in the field of practical applications of neural computing and related techniques such as genetic algorithms, fuzzy logic and neuro-fuzzy systems. All items relevant to building practical systems are within its scope, including but not limited to: adaptive computing, algorithms, applicable neural networks theory, applied statistics, architectures, artificial intelligence, benchmarks, case histories of innovative applications, fuzzy logic, genetic algorithms, hardware implementations, hybrid intelligent systems, intelligent agents, intelligent control systems, intelligent diagnostics, intelligent forecasting, machine learning, neural networks, neuro-fuzzy systems, pattern recognition, performance measures, self-learning systems, software simulations, supervised and unsupervised learning methods, and system engineering and integration. Featured contributions fall into several categories: Original Articles, Review Articles, Book Reviews, and Announcements.