Title: A Context-aware Black-box Adversarial Attack for Deep Driving Maneuver Classification Models
Authors: Ankur Sarker, Haiying Shen, Tanmoy Sen
DOI: 10.1109/SECON52354.2021.9491584
Venue: 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)
Published: 2021-07-06
Citations: 3
Abstract
In a connected autonomous vehicle (CAV) scenario, each vehicle uses an onboard deep neural network (DNN) model to interpret the time-series driving signals (e.g., speed, brake status) it receives from nearby vehicles, and then takes the actions needed to improve traffic safety and roadway efficiency. In this scenario, an attacker may plausibly launch an adversarial attack, adding unnoticeable perturbation to the actual driving signals to fool the DNN model inside a victim vehicle into outputting a wrong class, causing traffic congestion and/or accidents. Such an attack must be generated in near real-time, and the adversarial maneuver must be consistent with the current traffic context. However, previously proposed adversarial attacks fail to meet these requirements. To handle these challenges, in this paper, we propose a Context-aware Black-box Adversarial Attack (CBAA) for time-series DNN models in CAV scenarios. By analyzing real driving datasets, we observe that specific driving signals at certain time points have a higher impact on the DNN output. These influential spatio-temporal factors differ across traffic contexts (a combination of traffic factors such as congestion, slope, and curvature). Thus, CBAA first generates the perturbation only on the influential spatio-temporal signals for each context offline. When generating an attack online, CBAA uses the offline perturbation for the current context as a starting point and searches for the minimum perturbation that leads to misclassification using the zeroth-order gradient descent method. Limiting the spatio-temporal search scope with the constraint of context greatly expedites finding the final perturbation. Our extensive experimental studies using two different real driving datasets show that CBAA requires 43% fewer queries (to the DNN model to verify the attack success) and 53% less time than existing adversarial attacks.
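To make the core idea concrete, the sketch below illustrates a zeroth-order (finite-difference) gradient descent attack that perturbs only a masked subset of coordinates, in the spirit of CBAA's restriction to influential spatio-temporal signals. This is an illustrative toy, not the paper's implementation: the "black-box DNN" is a stand-in linear classifier, and the mask of influential positions, step sizes, and loss are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the victim's black-box DNN: a linear classifier over a
# flattened time-series of driving signals (illustrative only).
W = rng.normal(size=(3, 20))  # 3 maneuver classes, 20 = signals x time steps

def query_model(x):
    """Black-box query: the attacker sees only the class scores."""
    return W @ x

def attack_loss(x, true_class):
    """Margin loss: negative once the model misclassifies x."""
    scores = query_model(x)
    others = np.delete(scores, true_class)
    return scores[true_class] - others.max()

def zoo_attack(x0, true_class, mask, step=0.05, sigma=1e-3, iters=500):
    """Zeroth-order gradient descent restricted to 'mask' (standing in for
    the influential spatio-temporal positions of the current context)."""
    x = x0.copy()
    idx = np.flatnonzero(mask)
    for _ in range(iters):
        if attack_loss(x, true_class) < 0:
            break  # misclassified: attack succeeded
        # Estimate the gradient coordinate-wise via finite differences,
        # querying the model only at the masked coordinates.
        g = np.zeros_like(x)
        for i in idx:
            e = np.zeros_like(x)
            e[i] = sigma
            g[i] = (attack_loss(x + e, true_class) -
                    attack_loss(x - e, true_class)) / (2 * sigma)
        x[idx] -= step * np.sign(g[idx])  # signed step keeps each update small
    return x

x0 = rng.normal(size=20)
true_class = int(np.argmax(query_model(x0)))
mask = np.zeros(20)
mask[:8] = 1  # hypothetical influential positions for the current context
x_adv = zoo_attack(x0, true_class, mask)
print(attack_loss(x0, true_class), attack_loss(x_adv, true_class))
```

Restricting the finite-difference search to the masked coordinates is what cuts the query count: each iteration costs two model queries per masked coordinate rather than two per coordinate of the full signal window.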