Adversarial Attack to Deceive One Stage Object Detection Algorithms
Satish Kumar, C. Krishna, Sahil Khattar, Raj Kumar Tickoo
Proceedings of the 4th International Conference on Information Management & Machine Intelligence
Published: 2022-12-23 | DOI: 10.1145/3590837.3590873
Citations: 0
Abstract
In this paper, we focus on fooling one-stage object detection algorithms and propose a black-box method that generates adversarial examples which are imperceptible to human eyes yet deceive one-stage object detectors. We generate a random perturbation and scale it based on unsuccessful attack attempts and a maximum number of iterations. The generated perturbation is added to the original image to produce a perturbed image, and the one-stage detector's outputs on the original and perturbed images are then compared. We define three success scenarios: hiding objects, misclassification, and a change in object count; if an attack achieves any one of these, it is considered successful. The proposed work is evaluated in terms of perceptibility, average number of iterations, and convergence rate. The results show a 98.05% convergence rate at 4.7 average iterations with a PASS score of 1.94×10⁻² on RetinaNet, a 98.73% convergence rate at 4.68 average iterations with a PASS score of 1.58×10⁻² on Single Shot multi-box Detection (SSD), and a 77.11% convergence rate at 6.08 average iterations with a PASS score of 2.04×10⁻² on You Only Look Once version 3 (YOLOv3), which demonstrates the effectiveness of the proposed attack.
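The attack loop described in the abstract — draw a random perturbation, add it to the image, query the detector, and scale the perturbation up after each failed attempt until a success criterion is met or a maximum iteration count is reached — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `detect` callback, the scaling factor, and the toy success check (any change in label set or object count; hiding is a special case of a changed count) are assumptions for demonstration.

```python
import numpy as np

def black_box_attack(image, detect, max_iters=10, eps=0.02,
                     scale_factor=1.5, seed=0):
    """Iterative random-perturbation black-box attack (sketch).

    `detect` is the target one-stage detector: it maps an image in
    [0, 1] to a list of (label, box) detections. The perturbation
    magnitude starts at `eps` and is scaled up after every
    unsuccessful attempt, for at most `max_iters` attempts.
    Returns (perturbed_image, iterations_used) on success,
    or (None, max_iters) on failure.
    """
    rng = np.random.default_rng(seed)
    clean = detect(image)
    clean_labels = sorted(label for label, _ in clean)
    step = eps
    for i in range(1, max_iters + 1):
        # Random perturbation, added to the original (not the previous) image.
        noise = rng.uniform(-1.0, 1.0, size=image.shape)
        perturbed = np.clip(image + step * noise, 0.0, 1.0)
        adv = detect(perturbed)
        adv_labels = sorted(label for label, _ in adv)
        # Success scenarios from the paper: hidden objects,
        # misclassification, or a changed object count.
        if len(adv) != len(clean) or adv_labels != clean_labels:
            return perturbed, i
        step *= scale_factor  # scale up after an unsuccessful attempt
    return None, max_iters
```

Because the method only queries the detector's output, it needs no access to gradients or weights, which is what makes it black-box; the cost is the extra queries spent scaling the perturbation after failures.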