SpotOn: A Gradient-based Targeted Data Poisoning Attack on Deep Neural Networks

Yash Khare, Kumud Lakara, Sparsh Mittal, Arvind Kaushik, Rekha Singhal
{"title":"SpotOn:基于梯度的深度神经网络目标数据中毒攻击","authors":"Yash Khare, Kumud Lakara, Sparsh Mittal, Arvind Kaushik, Rekha Singhal","doi":"10.1109/ISQED57927.2023.10129311","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs) are vulnerable to adversarial inputs, which are created by adding minor perturbations to the genuine inputs. Previous gradient-based adversarial attacks, such as the \"fast gradient sign method\" (FGSM), add an equal amount (say ϵ) of noise to all the pixels of an image. This degrades image quality significantly, such that a human validator can easily detect the resultant adversarial samples. We propose a novel gradient-based adversarial attack technique named SpotOn, which seeks to maintain the quality of adversarial images high. We first identify an image’s region of importance (ROI) using Grad-CAM. SpotOn has three variants. Two variants of SpotOn attack only the ROI, whereas the third variant adds an epsilon (ϵ) amount of noise to the ROI and a much smaller amount of noise (say ϵ/3) to the remaining image. On Caltech101 dataset, compared to FGSM, SpotOn achieves comparable degradation in CNN accuracy while maintaining much higher image quality. For example, for ϵ = 0.1, FGSM degrades VGG19 accuracy from 92% to 8% and leads to an SSIM value of 0.48 by attacking all pixels in an image. By contrast, SpotOn-VariableNoise attacks only 34.8% of the pixels in the image; degrades accuracy to 10.5% and maintains an SSIM value of 0.78. This makes SpotOn an effective data-poisoning attack technique. The code is available from https://github.com/CandleLabAI/SpotOn-AttackOnDNNs.","PeriodicalId":315053,"journal":{"name":"2023 24th International Symposium on Quality Electronic Design (ISQED)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SpotOn: A Gradient-based Targeted Data Poisoning Attack on Deep Neural Networks\",\"authors\":\"Yash Khare, Kumud Lakara, Sparsh Mittal, Arvind Kaushik, Rekha Singhal\",\"doi\":\"10.1109/ISQED57927.2023.10129311\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks (DNNs) are vulnerable to adversarial inputs, which are created by adding minor perturbations to the genuine inputs. Previous gradient-based adversarial attacks, such as the \\\"fast gradient sign method\\\" (FGSM), add an equal amount (say ϵ) of noise to all the pixels of an image. This degrades image quality significantly, such that a human validator can easily detect the resultant adversarial samples. We propose a novel gradient-based adversarial attack technique named SpotOn, which seeks to maintain the quality of adversarial images high. We first identify an image’s region of importance (ROI) using Grad-CAM. SpotOn has three variants. Two variants of SpotOn attack only the ROI, whereas the third variant adds an epsilon (ϵ) amount of noise to the ROI and a much smaller amount of noise (say ϵ/3) to the remaining image. On Caltech101 dataset, compared to FGSM, SpotOn achieves comparable degradation in CNN accuracy while maintaining much higher image quality. For example, for ϵ = 0.1, FGSM degrades VGG19 accuracy from 92% to 8% and leads to an SSIM value of 0.48 by attacking all pixels in an image. By contrast, SpotOn-VariableNoise attacks only 34.8% of the pixels in the image; degrades accuracy to 10.5% and maintains an SSIM value of 0.78. 
This makes SpotOn an effective data-poisoning attack technique. The code is available from https://github.com/CandleLabAI/SpotOn-AttackOnDNNs.\",\"PeriodicalId\":315053,\"journal\":{\"name\":\"2023 24th International Symposium on Quality Electronic Design (ISQED)\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 24th International Symposium on Quality Electronic Design (ISQED)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISQED57927.2023.10129311\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 24th International Symposium on Quality Electronic Design (ISQED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISQED57927.2023.10129311","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial inputs, which are created by adding minor perturbations to genuine inputs. Previous gradient-based adversarial attacks, such as the "fast gradient sign method" (FGSM), add an equal amount of noise (say ϵ) to all pixels of an image. This degrades image quality significantly, so that a human validator can easily detect the resulting adversarial samples. We propose a novel gradient-based adversarial attack technique named SpotOn, which seeks to keep the quality of adversarial images high. We first identify an image's region of importance (ROI) using Grad-CAM. SpotOn has three variants. Two variants attack only the ROI, whereas the third adds noise of magnitude ϵ to the ROI and a much smaller amount (say ϵ/3) to the rest of the image. On the Caltech101 dataset, compared to FGSM, SpotOn achieves comparable degradation in CNN accuracy while maintaining much higher image quality. For example, for ϵ = 0.1, FGSM degrades VGG19 accuracy from 92% to 8% and yields an SSIM value of 0.48 by attacking all pixels in an image. By contrast, SpotOn-VariableNoise attacks only 34.8% of the pixels, degrades accuracy to 10.5%, and maintains an SSIM value of 0.78. This makes SpotOn an effective data-poisoning attack technique. The code is available at https://github.com/CandleLabAI/SpotOn-AttackOnDNNs.
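
To illustrate the SpotOn-VariableNoise idea described in the abstract, the sketch below applies an FGSM-style sign-gradient perturbation whose per-pixel magnitude is ϵ inside the ROI and ϵ/3 outside it. This is a minimal sketch under stated assumptions, not the authors' implementation (see the repository linked above): the `variable_noise_attack` function and the dummy ROI mask are hypothetical, and the paper derives the ROI from a Grad-CAM heatmap rather than the placeholder mask used here.

```python
# Minimal sketch of the SpotOn-VariableNoise idea: an FGSM-style sign-gradient
# perturbation with per-pixel magnitude eps inside the region of importance
# (ROI) and eps/3 outside it. The paper derives the ROI with Grad-CAM; here the
# ROI mask is passed in directly, so this is NOT the authors' code.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19


def variable_noise_attack(model, image, label, roi_mask, eps=0.1, eps_outside=None):
    """Return an adversarial copy of `image` (shape (1, C, H, W), values in [0, 1]).

    roi_mask: (1, 1, H, W) tensor with 1 inside the ROI and 0 outside.
    eps_outside defaults to eps / 3, matching the ratio quoted in the abstract.
    """
    if eps_outside is None:
        eps_outside = eps / 3.0

    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Per-pixel noise budget: eps inside the ROI, eps/3 elsewhere.
    per_pixel_eps = roi_mask * eps + (1.0 - roi_mask) * eps_outside
    adv = image + per_pixel_eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


# Hypothetical usage. The paper evaluates a VGG19 trained on Caltech101; an
# untrained VGG19 and a random input are used here only to keep the sketch
# self-contained, and the central square stands in for a Grad-CAM ROI map.
model = vgg19(weights=None).eval()
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([0])
roi_mask = torch.zeros(1, 1, 224, 224)
roi_mask[..., 56:168, 56:168] = 1.0

adv_image = variable_noise_attack(model, image, label, roi_mask, eps=0.1)
```

In practice, the binary ROI mask would presumably be obtained by thresholding the Grad-CAM heatmap for the predicted class; that thresholding step is an assumption here, as the abstract does not specify it.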