A method to generate adversarial examples based on color variety of adjacent pixels

Tomoki Kamegawa, Masaomi Kimura, Imam Mukhlash, Mohammad Iqbal
{"title":"A method to generate adversarial examples based on color variety of adjacent pixels","authors":"Tomoki Kamegawa, Masaomi Kimura, Imam Mukhlash, Mohammad Iqbal","doi":"10.54941/ahfe1004184","DOIUrl":null,"url":null,"abstract":"Deep neural networks have improved the performance of large-scale learning tasks such as image recognition and speech recognition. However, neural networks also have vulnerabilities. Adversarial examples are generated by adding perturbations to images to cause incorrect predictions of image classifiers. The well-known perturbation attack is JSMA, which is relatively fast to generate perturbation and requires only simple procedures and is widely used in cybersecurity, anomaly detection and intrusion detection. However, there are problems with the way to perturb pixels. JSMA’s perturbations are easily perceivable by the human eyes because JSAM adds large perturbations to pixels. Some previous methods to generate adversarial examples did not assume that adversarial examples are checked by human eyes and allow larger perturbation to be adding to a single pixel. However, in situations where a deep learning model causes significant damage if it misrecognizes an input, a visual check by a human is necessary. In such cases, adversarial examples should not only cause misclassification in the image classifier system but also require less perturbation to avoid human perception of the perturbation. We propose methods to improve the JSMA problems. Specifically, it adjusts the amount of perturbation by calculating the variance between the value of the pixel to be perturbed and its surrounding pixels. If a large perturbation is added to the area of an image with a large pixel value variation, the perturbation will be imperceptible. In such case, perceivability does not increase significantly with a slightly larger perturbation. In contrast, if the large perturbation is added to the area of an image with small pixel value variation, the perturbation will be more perceptible. In such case, perturbations must be small. In our previous study, we assumed thresholds to classify the perturbations into two classes, large perturbation and small perturbation. If the variance was larger than the threshold, a larger perturbation was added; if the variance was smaller than the threshold, a smaller perturbation was added, which achieved a reduction in the amount of perturbation. However, there were still rooms of improvements of the perturbation to reduce the perceptibility. In this study, we focused on that there were differences in the perception of perturbations depending on the color of the pixel. The amount of perturbation should vary from pixel to pixel, not a fixed amount. Not only the variance of the surrounding pixels but also the variance of a larger area is calculated. By using these ratios, the amount of perturbation is varied from pixel to pixel. Experimental results using cifar-10 showed that the proposed method reduced the amount of perturbation to pixels with a misclassification success rate comparable to that of JSMA and our past method. 
We also confirmed that the reduced perturbation made the perturbation less perceptible.","PeriodicalId":470195,"journal":{"name":"AHFE international","volume":"127 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AHFE international","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1004184","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep neural networks have improved the performance of large-scale learning tasks such as image recognition and speech recognition. However, neural networks also have vulnerabilities. Adversarial examples are generated by adding perturbations to images to cause incorrect predictions by image classifiers. A well-known perturbation attack is JSMA, which generates perturbations relatively quickly, requires only simple procedures, and is widely used in cybersecurity, anomaly detection, and intrusion detection. However, the way it perturbs pixels is problematic: JSMA's perturbations are easily perceived by the human eye because JSMA adds large perturbations to individual pixels. Some previous methods for generating adversarial examples did not assume that adversarial examples would be checked by human eyes and allowed large perturbations to be added to a single pixel. However, in situations where a deep learning model would cause significant damage if it misrecognized an input, a visual check by a human is necessary. In such cases, adversarial examples should not only cause misclassification in the image classifier but should also require little perturbation, so that humans do not perceive it. We propose methods to address these problems with JSMA. Specifically, our method adjusts the amount of perturbation by calculating the variance between the value of the pixel to be perturbed and its surrounding pixels. If a large perturbation is added to an area of the image with large pixel value variation, the perturbation will be imperceptible; in that case, perceptibility does not increase significantly with a slightly larger perturbation. In contrast, if a large perturbation is added to an area with small pixel value variation, the perturbation will be more perceptible; in that case, perturbations must be small. In our previous study, we used a threshold to classify perturbations into two classes, large and small: if the variance was larger than the threshold, a larger perturbation was added; if it was smaller, a smaller perturbation was added, which reduced the total amount of perturbation. However, there was still room to improve the perturbation and reduce its perceptibility. In this study, we focus on the fact that the perceptibility of a perturbation differs depending on the color of the pixel, so the amount of perturbation should vary from pixel to pixel rather than being fixed. We calculate not only the variance of the surrounding pixels but also the variance of a larger area, and use the ratio of the two to vary the amount of perturbation per pixel. Experimental results on CIFAR-10 showed that the proposed method reduced the amount of perturbation added to pixels while achieving a misclassification success rate comparable to that of JSMA and our previous method. We also confirmed that the reduced perturbation was less perceptible.
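To make the variance-ratio idea concrete, the following Python sketch shows one plausible reading of the per-pixel scaling step. It is a minimal illustration, not the authors' implementation: the window sizes, the ratio rule (capping the scale at 1.0), and the clipping to the valid pixel range are assumptions made here for clarity.

import numpy as np

def local_variance(image, y, x, radius):
    # Variance of pixel values in a (2*radius+1)^2 window around (y, x),
    # clipped at the image borders.
    h, w = image.shape[:2]
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    return float(np.var(image[y0:y1, x0:x1]))

def scaled_perturbation(image, y, x, base_eps, small_radius=1, large_radius=3):
    # Hypothetical scaling rule: compare the variance in a small neighborhood
    # to the variance over a larger area. A busy region (high local variance
    # relative to the wider area) tolerates a larger perturbation; a flat
    # region gets a smaller one.
    v_small = local_variance(image, y, x, small_radius)
    v_large = local_variance(image, y, x, large_radius)
    ratio = v_small / (v_large + 1e-8)   # guard against division by zero
    return base_eps * min(ratio, 1.0)    # never exceed the base amount

# Example usage on a CIFAR-10-sized image (values in [0, 1]):
img = np.random.rand(32, 32, 3).astype(np.float32)
eps = scaled_perturbation(img, 16, 16, base_eps=0.3)
img[16, 16] = np.clip(img[16, 16] + eps, 0.0, 1.0)

Under this reading, the ratio plays the role of the fixed large/small threshold in the earlier study: instead of two discrete perturbation levels, each pixel receives a continuously scaled amount determined by how much its immediate neighborhood varies relative to its wider surroundings.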