Preserving Spatial Information to Enhance Performance of Image Forgery Classification

Hanh Phan-Xuan, T. Le-Tien, Thuy Nguyen-Chinh, Thien Do-Tieu, Qui Nguyen-Van, Tuan Nguyen-Thanh
{"title":"保留空间信息提高图像伪造分类性能","authors":"Hanh Phan-Xuan, T. Le-Tien, Thuy Nguyen-Chinh, Thien Do-Tieu, Qui Nguyen-Van, Tuan Nguyen-Thanh","doi":"10.1109/ATC.2019.8924504","DOIUrl":null,"url":null,"abstract":"As there are a huge range of powerful tools to edit images now, the need for verifying the authentication of images is more urgent than ever. While forgery methods are increasingly more subtle that even human vision seems hard to recognize these manipulations, conventional algorithms, which try to detect tampering traces, often pre-define assumptions that limit the scope of problem. Therefore, such methods are unable to encounter forgery methods in general applications. In this paper, we propose a framework that utilizes Deep Learning techniques to detect tampered images. Concretely, the MobileNetV2 network in [21] is modified so that it can be consistent to the task of image forgery detection. We argue that by remaining spatial dimension of early layers, the model is likely to learn rich features in these layers, and then following layers are to abstract these rich features for making a decision whether an image is forged. Besides, we also conduct a comprehensive experiment to prove those arguments. Experimental results show that the architecture-modified network achieves a remarkable accuracy of 95.15%, which surpasses others relying on the original architecture by a large margin up to 12.09%.","PeriodicalId":409591,"journal":{"name":"2019 International Conference on Advanced Technologies for Communications (ATC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Preserving Spatial Information to Enhance Performance of Image Forgery Classification\",\"authors\":\"Hanh Phan-Xuan, T. Le-Tien, Thuy Nguyen-Chinh, Thien Do-Tieu, Qui Nguyen-Van, Tuan Nguyen-Thanh\",\"doi\":\"10.1109/ATC.2019.8924504\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As there are a huge range of powerful tools to edit images now, the need for verifying the authentication of images is more urgent than ever. While forgery methods are increasingly more subtle that even human vision seems hard to recognize these manipulations, conventional algorithms, which try to detect tampering traces, often pre-define assumptions that limit the scope of problem. Therefore, such methods are unable to encounter forgery methods in general applications. In this paper, we propose a framework that utilizes Deep Learning techniques to detect tampered images. Concretely, the MobileNetV2 network in [21] is modified so that it can be consistent to the task of image forgery detection. We argue that by remaining spatial dimension of early layers, the model is likely to learn rich features in these layers, and then following layers are to abstract these rich features for making a decision whether an image is forged. Besides, we also conduct a comprehensive experiment to prove those arguments. 
Experimental results show that the architecture-modified network achieves a remarkable accuracy of 95.15%, which surpasses others relying on the original architecture by a large margin up to 12.09%.\",\"PeriodicalId\":409591,\"journal\":{\"name\":\"2019 International Conference on Advanced Technologies for Communications (ATC)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Advanced Technologies for Communications (ATC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ATC.2019.8924504\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Advanced Technologies for Communications (ATC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ATC.2019.8924504","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

As there is now a huge range of powerful tools for editing images, the need to verify the authenticity of images is more urgent than ever. While forgery methods are becoming so subtle that even human vision can hardly recognize these manipulations, conventional algorithms that try to detect tampering traces often rely on pre-defined assumptions that limit the scope of the problem. Therefore, such methods cannot cope with the forgery methods found in general applications. In this paper, we propose a framework that utilizes Deep Learning techniques to detect tampered images. Concretely, the MobileNetV2 network in [21] is modified so that it is consistent with the task of image forgery detection. We argue that by retaining the spatial dimensions of the early layers, the model is likely to learn rich features in these layers, and the subsequent layers then abstract these rich features to decide whether an image is forged. Besides, we also conduct comprehensive experiments to verify these arguments. Experimental results show that the architecture-modified network achieves a remarkable accuracy of 95.15%, surpassing models relying on the original architecture by a large margin of up to 12.09%.
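The abstract does not spell out the exact architectural modification. As an illustration only, the sketch below shows one way to keep early-layer spatial resolution in a MobileNetV2 backbone using PyTorch and torchvision, and to replace the ImageNet head with a two-way (authentic/forged) classifier. The function name build_forgery_classifier and the choice of which strides to relax are assumptions for the sketch, not details taken from the paper; a recent torchvision (>= 0.13) is assumed for the weights argument.

```python
# Minimal sketch (not the authors' code): relax the strides of the early layers of
# torchvision's MobileNetV2 so early feature maps keep their spatial resolution,
# then attach a binary authentic/forged classification head.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


def build_forgery_classifier(num_classes: int = 2) -> nn.Module:
    # Architecture only; whether the paper used pretrained weights is not stated.
    model = mobilenet_v2(weights=None)

    # Stem convolution: originally stride 2, set to stride 1 so the first
    # feature map keeps the input resolution.
    model.features[0][0].stride = (1, 1)

    # First downsampling inverted-residual block (index 2 in torchvision's
    # layout -- an assumption about how far "early layers" extends): change its
    # depthwise convolution from stride 2 to stride 1 as well.
    for m in model.features[2].modules():
        if isinstance(m, nn.Conv2d) and m.stride == (2, 2):
            m.stride = (1, 1)

    # Replace the 1000-way ImageNet classifier with a 2-way forgery head.
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)
    return model


if __name__ == "__main__":
    net = build_forgery_classifier()
    x = torch.randn(1, 3, 224, 224)   # dummy RGB input
    print(net(x).shape)               # torch.Size([1, 2])
```

Because MobileNetV2 ends with global average pooling, the larger intermediate feature maps do not break the forward pass; they do, however, increase the memory and compute of all subsequent layers, which is the price of preserving spatial information early on.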