Patch Replacement: A Transformation-based Method to Improve Robustness against Adversarial Attacks

Hanwei Zhang, Yannis Avrithis, T. Furon, L. Amsaleg
{"title":"Patch Replacement: A Transformation-based Method to Improve Robustness against Adversarial Attacks","authors":"Hanwei Zhang, Yannis Avrithis, T. Furon, L. Amsaleg","doi":"10.1145/3475731.3484955","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) are robust against intra-class variability of images, pose variations and random noise, but vulnerable to imperceptible adversarial perturbations that are well-crafted precisely to mislead. While random noise even of relatively large magnitude can hardly affect predictions, adversarial perturbations of very small magnitude can make a classifier fail completely. To enhance robustness, we introduce a new adversarial defense called patch replacement, which transforms both the input images and their intermediate features at early layers to make adversarial perturbations behave similarly to random noise. We decompose images/features into small patches and quantize them according to a codebook learned from legitimate training images. This maintains the semantic information of legitimate images, while removing as much as possible the effect of adversarial perturbations. Experiments show that patch replacement improves robustness against both white-box and gray-box attacks, compared with other transformation-based defenses. It has a low computational cost since it does not need training or fine-tuning the network. Importantly, in the white-box scenario, it increases the robustness, while other transformation-based defenses do not.","PeriodicalId":355632,"journal":{"name":"Proceedings of the 1st International Workshop on Trustworthy AI for Multimedia Computing","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Trustworthy AI for Multimedia Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3475731.3484955","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Deep Neural Networks (DNNs) are robust against intra-class variability of images, pose variations, and random noise, but vulnerable to imperceptible adversarial perturbations that are crafted precisely to mislead them. While random noise, even of relatively large magnitude, hardly affects predictions, adversarial perturbations of very small magnitude can make a classifier fail completely. To enhance robustness, we introduce a new adversarial defense called patch replacement, which transforms both the input images and their intermediate features at early layers so that adversarial perturbations behave similarly to random noise. We decompose images/features into small patches and quantize them according to a codebook learned from legitimate training images. This preserves the semantic information of legitimate images while removing as much as possible the effect of adversarial perturbations. Experiments show that patch replacement improves robustness against both white-box and gray-box attacks compared with other transformation-based defenses. It has a low computational cost since it requires no training or fine-tuning of the network. Importantly, in the white-box scenario, it increases robustness, while other transformation-based defenses do not.
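The abstract describes the defense only at a high level. Below is a minimal sketch of the general idea, assuming a codebook learned with k-means over non-overlapping pixel patches; the function names, patch size, and codebook size are illustrative assumptions, not the authors' implementation (which also applies the same quantization to early-layer features).

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(clean_images, patch=4, n_codes=256):
    """Learn a patch codebook from legitimate (clean) training images via k-means.
    NOTE: a sketch of the general idea, not the paper's exact procedure."""
    patches = []
    for img in clean_images:                         # img: (H, W, C) float array
        H, W, _ = img.shape
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                patches.append(img[y:y + patch, x:x + patch].ravel())
    km = KMeans(n_clusters=n_codes, n_init=4, random_state=0).fit(np.stack(patches))
    return km.cluster_centers_                       # shape: (n_codes, patch*patch*C)

def patch_replace(img, codebook, patch=4):
    """Quantize each non-overlapping patch of `img` to its nearest codeword."""
    out = img.copy()
    H, W, C = img.shape
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            v = img[y:y + patch, x:x + patch].ravel()
            idx = np.argmin(((codebook - v) ** 2).sum(axis=1))   # nearest codeword
            out[y:y + patch, x:x + patch] = codebook[idx].reshape(patch, patch, C)
    return out
```

At inference time, the quantized image (and, per the abstract, similarly quantized intermediate features at early layers) is fed to the unchanged classifier, which is why the defense needs no retraining or fine-tuning of the network.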