{"title":"MC-FGSM: Black-box Adversarial Attack for Deep Learning System","authors":"Wenqiang Zheng, Yanfang Li","doi":"10.1109/ISSREW53611.2021.00058","DOIUrl":null,"url":null,"abstract":"Deep learning (DL) technology has been widely applied in the safety-critical area, for instance, autopilot system in which the misbehavior will have a huge influence. Hence the reliability of DL system should be tested thoroughly. DL reliability testing is mainly achieved via adversarial attack, however, the existing attack methods lack mathematical proof whether the convergence of the attack can be guaranteed. This paper proposes a novel adversarial attack method, i.e., Monte Carlo-Fast Gradient Sign Method (MC-FGSM) to test the DL robustness. This method does not require any knowledge of the victim DL system. Specifically, this method first approximates the gradient of the input variable via Monte Carlo sampling technique, and then the gradient-based method is applied to generate adversarial attacks. Moreover, a strict mathematical proof has shown the gradient estimation is unbiased and the time complexity is $\\boldsymbol{O}(1)$, while the existing method is $\\boldsymbol{O}(N)$. The effectiveness of the proposed method is demonstrated by numerical experiments. This method can work as the reliability evaluation tool of the autopilot system.","PeriodicalId":385392,"journal":{"name":"2021 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSREW53611.2021.00058","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning (DL) technology has been widely applied in safety-critical areas, for instance, autopilot systems, where misbehavior can have severe consequences. Hence, the reliability of DL systems should be tested thoroughly. DL reliability testing is mainly achieved via adversarial attacks; however, existing attack methods lack a mathematical proof of whether convergence of the attack can be guaranteed. This paper proposes a novel adversarial attack method, the Monte Carlo Fast Gradient Sign Method (MC-FGSM), to test DL robustness. The method does not require any knowledge of the victim DL system. Specifically, it first approximates the gradient with respect to the input via a Monte Carlo sampling technique, and then applies a gradient-based method to generate adversarial examples. Moreover, a rigorous mathematical proof shows that the gradient estimate is unbiased and that the time complexity is $\boldsymbol{O}(1)$, while that of the existing method is $\boldsymbol{O}(N)$. The effectiveness of the proposed method is demonstrated by numerical experiments. The method can serve as a reliability evaluation tool for autopilot systems.
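The abstract describes two steps: estimate the gradient with respect to the input by Monte Carlo sampling of the black-box model, then take an FGSM-style sign step. The paper gives no implementation details here, so the sketch below shows only the general pattern under stated assumptions: loss_fn, sigma, n_samples, and epsilon are illustrative names, and the Gaussian finite-difference estimator is an assumption, not necessarily the exact estimator derived in the paper.

```python
import numpy as np

def mc_gradient_estimate(loss_fn, x, sigma=0.01, n_samples=32):
    """Black-box (zeroth-order) gradient estimate of loss_fn at x.

    Samples random Gaussian directions and weights them by the observed
    change in loss; loss_fn only needs to return a scalar loss, so no
    internal knowledge of the victim model is required.
    """
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        # Symmetric (antithetic) queries reduce the variance of the estimate.
        diff = (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2.0 * sigma)
        grad += diff * u
    return grad / n_samples

def mc_fgsm_attack(loss_fn, x, epsilon=0.03, sigma=0.01, n_samples=32):
    """One FGSM-style step using the Monte Carlo gradient estimate."""
    grad = mc_gradient_estimate(loss_fn, x, sigma, n_samples)
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep the example in the valid input range
```

In use, loss_fn would wrap a query to the victim model (e.g., the cross-entropy of its returned scores against the true label), so the attack needs only the model's outputs, consistent with the black-box setting described above.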