A strategy creating high-resolution adversarial images against convolutional neural networks and a feasibility study on 10 CNNs

IF 2.7 · Q2 · Computer Science, Information Systems
Franck Leprévost, A. O. Topal, Elmir Avdusinovic, Raluca Chitic
{"title":"A strategy creating high-resolution adversarial images against convolutional neural networks and a feasibility study on 10 CNNs","authors":"Franck Leprévost, A. O. Topal, Elmir Avdusinovic, Raluca Chitic","doi":"10.1080/24751839.2022.2132586","DOIUrl":null,"url":null,"abstract":"ABSTRACT To perform image recognition, Convolutional Neural Networks (CNNs) assess any image by first resizing it to its input size. In particular, high-resolution images are scaled down, say to for CNNs trained on ImageNet. So far, existing attacks, aiming at creating an adversarial image that a CNN would misclassify while a human would not notice any difference between the modified and unmodified images, proceed by creating adversarial noise in the resized domain and not in the high-resolution domain. The complexity of directly attacking high-resolution images leads to challenges in terms of speed, adversity and visual quality, making these attacks infeasible in practice. We design an indirect attack strategy that lifts to the high-resolution domain any existing attack that works efficiently in the CNN's input size domain. Adversarial noise created via this method is of the same size as the original image. We apply this approach to 10 state-of-the-art CNNs trained on ImageNet, with an evolutionary algorithm-based attack. Our method succeeded in 900 out of 1000 trials to create such adversarial images, that CNNs classify with probability in the adversarial category. Our indirect attack is the first effective method at creating adversarial images in the high-resolution domain.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"7 1","pages":"89 - 119"},"PeriodicalIF":2.7000,"publicationDate":"2022-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information and Telecommunication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/24751839.2022.2132586","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

To perform image recognition, Convolutional Neural Networks (CNNs) assess any image by first resizing it to their input size. In particular, high-resolution images are scaled down, e.g. to the 224 × 224 input size common among CNNs trained on ImageNet. So far, existing attacks, which aim at creating an adversarial image that a CNN misclassifies while a human notices no difference between the modified and unmodified images, proceed by creating adversarial noise in the resized domain rather than in the high-resolution domain. The complexity of directly attacking high-resolution images leads to challenges in terms of speed, adversity and visual quality, making these attacks infeasible in practice. We design an indirect attack strategy that lifts to the high-resolution domain any existing attack that works efficiently in the CNN's input-size domain. Adversarial noise created via this method is of the same size as the original image. We apply this approach to 10 state-of-the-art CNNs trained on ImageNet, with an evolutionary algorithm-based attack. Our method succeeded in 900 out of 1000 trials at creating adversarial images that the CNNs classify in the adversarial category with high probability. Our indirect attack is the first effective method for creating adversarial images in the high-resolution domain.
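To make the lifting idea concrete, below is a minimal sketch of how an attack operating at the CNN's input size could be raised to the high-resolution domain. It assumes a simple bilinear upscaling of the input-size noise, which is only one plausible lifting; the paper's actual strategy may differ. The `attack_fn` callable is hypothetical and stands in for any existing input-size attack, such as the evolutionary-algorithm attack the authors use.

```python
# Sketch: lifting an input-size attack to the high-resolution domain.
# The bilinear lifting in step 3 is an assumption for illustration,
# not necessarily the paper's method. `attack_fn` is hypothetical.

import torch
import torch.nn.functional as F

def lift_attack(model, image_hr, target_class, attack_fn,
                input_size=(224, 224)):
    """Create high-resolution adversarial noise from an input-size attack.

    image_hr:  float tensor of shape (1, 3, H, W), values in [0, 1].
    attack_fn: callable(model, image_lr, target_class) -> adversarial
               image of shape (1, 3, *input_size); assumed given.
    """
    # 1. Resize to the CNN's input size, as the CNN itself would.
    image_lr = F.interpolate(image_hr, size=input_size,
                             mode='bilinear', align_corners=False)

    # 2. Run the existing input-size attack to get low-resolution noise.
    adv_lr = attack_fn(model, image_lr, target_class)
    noise_lr = adv_lr - image_lr

    # 3. Lift the noise back to the original resolution, so the final
    #    perturbation has the same size as the original image.
    noise_hr = F.interpolate(noise_lr, size=image_hr.shape[-2:],
                             mode='bilinear', align_corners=False)
    adv_hr = (image_hr + noise_hr).clamp(0.0, 1.0)

    # 4. Verify adversity: the CNN only ever sees the resized image.
    adv_seen = F.interpolate(adv_hr, size=input_size,
                             mode='bilinear', align_corners=False)
    with torch.no_grad():
        prob = F.softmax(model(adv_seen), dim=1)[0, target_class].item()
    return adv_hr, prob
```

The final check mirrors the setting described in the abstract: the CNN only ever evaluates the resized image, so the lifted noise must remain adversarial after downscaling while staying imperceptible at full resolution.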
Source journal: Journal of Information and Telecommunication
CiteScore: 7.50
Self-citation rate: 0.00%
Articles per year: 18
Review time: 27 weeks