CMV2U-Net: A U-shaped network with edge-weighted features for detecting and localizing image splicing

IF 1.5 · JCR Q2 (MEDICINE, LEGAL) · CAS Tier 4 (Medicine)
Arslan Akram PhD, Muhammad Arfan Jaffar PhD, Javed Rashid PhD, Salah Mahmoud Boulaaras PhD, Muhammad Faheem PhD
{"title":"CMV2U-Net: A U-shaped network with edge-weighted features for detecting and localizing image splicing","authors":"Arslan Akram PhD,&nbsp;Muhammad Arfan Jaffar PhD,&nbsp;Javed Rashid PhD,&nbsp;Salah Mahmoud Boulaaras PhD,&nbsp;Muhammad Faheem PhD","doi":"10.1111/1556-4029.70033","DOIUrl":null,"url":null,"abstract":"<p>The practice of cutting and pasting portions of one image into another, known as “image splicing,” is commonplace in the field of image manipulation. Image splicing detection using deep learning has been a hot research topic for the past few years. However, there are two problems with the way deep learning is currently implemented: first, it is not good enough for feature fusion, and second, it uses only simple models for feature extraction and encoding, which makes the models vulnerable to overfitting. To tackle these problems, this research proposes CMV2U-Net, an edge-weighted U-shaped network-based image splicing forgery localization approach. An initial step is the development of a feature extraction module that can process two streams of input images simultaneously, allowing for the simultaneous extraction of semantically connected and semantically agnostic features. One characteristic is that a hierarchical fusion approach has been devised to prevent data loss in shallow features that are either semantically related or semantically irrelevant. This approach implements a channel attention mechanism to monitor manipulation trajectories involving multiple levels. Extensive trials on numerous public datasets prove that CMV2U-Net provides high AUC and <i>F</i><sub>1</sub> in localizing tampered regions, outperforming state-of-the-art techniques. Noise, Gaussian blur, and JPEG compression are post-processing threats that CMV2U-Net has successfully resisted.</p>","PeriodicalId":15743,"journal":{"name":"Journal of forensic sciences","volume":"70 3","pages":"1026-1043"},"PeriodicalIF":1.5000,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of forensic sciences","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/1556-4029.70033","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MEDICINE, LEGAL","Score":null,"Total":0}
Citations: 0

Abstract

Cutting and pasting portions of one image into another, known as "image splicing," is one of the most common forms of image manipulation, and detecting it with deep learning has been an active research topic in recent years. Current deep-learning approaches, however, suffer from two problems: their feature fusion is inadequate, and they rely on overly simple models for feature extraction and encoding, which leaves them prone to overfitting. To address these problems, this research proposes CMV2U-Net, an edge-weighted, U-shaped-network approach to localizing image-splicing forgeries. Its feature extraction module processes two input streams simultaneously, extracting semantically related and semantically agnostic features in parallel. A hierarchical fusion scheme prevents information loss in the shallow features of both kinds, and a channel attention mechanism tracks manipulation traces across multiple levels. Extensive experiments on numerous public datasets show that CMV2U-Net achieves high AUC and F1 scores in localizing tampered regions, outperforming state-of-the-art techniques, and that it withstands post-processing attacks such as noise, Gaussian blur, and JPEG compression.
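The abstract names two architectural ingredients, a dual-stream encoder and channel attention, without giving implementation details. As a rough illustration only, the following minimal PyTorch sketch shows one plausible form of those ingredients; the module names (`ChannelAttention`, `DualStreamFusion`), the squeeze-and-excitation style of the attention block, and all shapes and hyperparameters are assumptions for exposition, not the authors' actual code.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form):
    global average pooling, then a two-layer bottleneck that produces
    per-channel weights in [0, 1] used to rescale the feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                              # per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)  # reweight channels, broadcasting over H x W

class DualStreamFusion(nn.Module):
    """Fuses an RGB (semantic) stream with a noise/edge (semantic-agnostic)
    stream at one encoder level, gated by channel attention. Hypothetical
    sketch; the paper's actual fusion rule is not specified in the abstract."""
    def __init__(self, channels: int):
        super().__init__()
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.attn = ChannelAttention(channels)

    def forward(self, rgb_feat: torch.Tensor, noise_feat: torch.Tensor) -> torch.Tensor:
        fused = self.merge(torch.cat([rgb_feat, noise_feat], dim=1))
        return self.attn(fused)

if __name__ == "__main__":
    rgb = torch.randn(1, 64, 128, 128)    # semantic-stream features
    noise = torch.randn(1, 64, 128, 128)  # semantic-agnostic-stream features
    out = DualStreamFusion(64)(rgb, noise)
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In a full U-shaped network, a fusion block like this would sit at each encoder level, with the fused maps passed both down the encoder and across skip connections to the decoder.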

Source journal: Journal of Forensic Sciences (Medicine: Legal)
CiteScore: 4.00 · Self-citation rate: 12.50% · Annual articles: 215 · Review time: 2 months
Journal description: The Journal of Forensic Sciences (JFS) is the official publication of the American Academy of Forensic Sciences (AAFS). It is devoted to the publication of original investigations, observations, scholarly inquiries and reviews in various branches of the forensic sciences. These include anthropology, criminalistics, digital and multimedia sciences, engineering and applied sciences, pathology/biology, psychiatry and behavioral science, jurisprudence, odontology, questioned documents, and toxicology. Similar submissions dealing with forensic aspects of other sciences and the social sciences are also accepted, as are submissions dealing with scientifically sound emerging science disciplines. The content and/or views expressed in the JFS are not necessarily those of the AAFS, the JFS Editorial Board, the organizations with which authors are affiliated, or the publisher of JFS. All manuscript submissions are double-blind peer-reviewed.