Camouflaged Adversarial Attack on Object Detector
Jeong-Soo Kim, Kyungmin Lee, Hyeongkeun Lee, Hunmin Yang, Se-Yoon Oh
2021 21st International Conference on Control, Automation and Systems (ICCAS), 2021-10-12
DOI: 10.23919/ICCAS52745.2021.9650004
Abstract
The existence of physical-world adversarial examples such as adversarial patches demonstrates the vulnerability of real-world deep learning systems. It is therefore essential to develop efficient adversarial attack algorithms to identify potential risks and build robust systems. Patch-based physical adversarial attacks have shown their effectiveness against neural network-based object detectors. However, the generated patches are quite perceptible to humans, violating the fundamental imperceptibility assumption of adversarial examples. In this work, we present task-specific loss functions that can generate imperceptible adversarial patches based on camouflaged patterns. First, we propose a constrained optimization method with two camouflage assessment metrics to quantify camouflage performance. Then, we show that regularization with these metrics helps generate adversarial patches based on camouflage patterns. Finally, we validate our methods with various experiments and show that we can generate natural-style camouflaged adversarial patches with comparable attack performance.
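The abstract describes the general recipe (a patch optimized against the detector, regularized toward a camouflage pattern) without reproducing the paper's exact loss functions. The following is a minimal PyTorch-style sketch of that recipe under stated assumptions: the `apply_patch` helper, the MSE-based camouflage term, the weight `lam`, and the detector interface (returning per-box confidence scores) are illustrative stand-ins, not the authors' metrics or implementation.

```python
import torch
import torch.nn.functional as F


def apply_patch(images, patch, top=50, left=50):
    """Paste the patch onto every image at a fixed location (illustrative;
    a physical attack would also model placement, scale, and lighting)."""
    patched = images.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched


def camouflage_loss(patch, reference):
    """Stand-in camouflage metric: pixel-wise distance to a reference
    camouflage pattern (the paper defines its own assessment metrics)."""
    return F.mse_loss(patch, reference)


def optimize_patch(detector, images, reference_pattern,
                   steps=500, lr=0.01, lam=1.0):
    """Optimize a patch that suppresses detections while staying close to a
    camouflage reference. `detector(patched)` is assumed to return per-box
    confidence scores of shape (batch, num_boxes)."""
    patch = reference_pattern.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch)
        scores = detector(patched)
        attack_loss = scores.max(dim=-1).values.mean()   # strongest detection per image
        loss = attack_loss + lam * camouflage_loss(patch, reference_pattern)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0.0, 1.0)                      # keep pixels in the valid image range
    return patch.detach()
```

In this sketch the weight `lam` trades off attack strength against camouflage fidelity, which mirrors the constrained-optimization framing in the abstract: the camouflage assessment metrics act as regularizers that pull the patch toward a natural-looking pattern while the detection loss drives the attack.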