A Context-aware Black-box Adversarial Attack for Deep Driving Maneuver Classification Models

Ankur Sarker, Haiying Shen, Tanmoy Sen
{"title":"A Context-aware Black-box Adversarial Attack for Deep Driving Maneuver Classification Models","authors":"Ankur Sarker, Haiying Shen, Tanmoy Sen","doi":"10.1109/SECON52354.2021.9491584","DOIUrl":null,"url":null,"abstract":"In a connected autonomous vehicle (CAV) scenario, each vehicle utilizes an onboard deep neural network (DNN) model to understand its received time-series driving signals (e.g., speed, brake status) from its nearby vehicles, and then takes necessary actions to increase traffic safety and roadway efficiency. In the scenario, it is plausible that an attacker may launch an adversarial attack, in which the attacker adds unnoticeable perturbation to the actual driving signals to fool the DNN model inside a victim vehicle to output a misclassified class to cause traffic congestion and/or accidents. Such an attack must be generated in near real-time and the adversarial maneuver must be consistent with the current traffic context. However, previously proposed adversarial attacks fail to meet these requirements. To handle these challenges, in this paper, we propose a Context- aware Black-box Adversarial Attack (CBAA) for time-series DNN models in CAV scenarios. By analyzing real driving datasets, we observe that specific driving signals at certain time points have a higher impact on the DNN output. These influential spatio-temporal factors differ in different traffic contexts (a combination of different traffic factors (e.g., congestion, slope, and curvature)). Thus, CBAA first generates the perturbation only on the influential spatio-temporal signals for each context offline. In generating an attack online, CBAA uses the offline perturbation for the current context to start searching the minimum perturbation using the zeroth-order gradient descent method that will lead to the misclassification. Limiting the spatio-temporal searching scope with the constraint of context greatly expedites finding the final perturbation. Our extensive experimental studies using two different real driving datasets show that CBAA requires 43% fewer queries (to the DNN model to verify the attack success) and 53% less time than existing adversarial attacks.","PeriodicalId":120945,"journal":{"name":"2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SECON52354.2021.9491584","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

In a connected autonomous vehicle (CAV) scenario, each vehicle utilizes an onboard deep neural network (DNN) model to understand the time-series driving signals (e.g., speed, brake status) it receives from nearby vehicles, and then takes necessary actions to increase traffic safety and roadway efficiency. In this scenario, it is plausible that an attacker may launch an adversarial attack, in which the attacker adds an unnoticeable perturbation to the actual driving signals to fool the DNN model inside a victim vehicle into outputting an incorrect class, causing traffic congestion and/or accidents. Such an attack must be generated in near real-time, and the adversarial maneuver must be consistent with the current traffic context. However, previously proposed adversarial attacks fail to meet these requirements. To handle these challenges, in this paper, we propose a Context-aware Black-box Adversarial Attack (CBAA) for time-series DNN models in CAV scenarios. By analyzing real driving datasets, we observe that specific driving signals at certain time points have a higher impact on the DNN output. These influential spatio-temporal factors differ across traffic contexts, where a context is a combination of traffic factors (e.g., congestion, slope, and curvature). Thus, CBAA first generates the perturbation only on the influential spatio-temporal signals for each context offline. When generating an attack online, CBAA uses the offline perturbation for the current context as a starting point and searches, via the zeroth-order gradient descent method, for the minimum perturbation that leads to misclassification. Limiting the spatio-temporal search scope with the context constraint greatly expedites finding the final perturbation. Our extensive experimental studies using two different real driving datasets show that CBAA requires 43% fewer queries (to the DNN model to verify the attack success) and 53% less time than existing adversarial attacks.
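To make the online search concrete, below is a minimal sketch of the kind of zeroth-order, black-box attack loop the abstract describes: a warm start from an offline perturbation, with the search confined to a context-specific mask of influential spatio-temporal positions. All names (`query_model`, `influential_mask`, `delta_init`) and parameter values are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def zeroth_order_attack(x, true_label, query_model, influential_mask,
                        delta_init, step=0.01, mu=1e-3, query_budget=3000):
    """Search for a misclassifying perturbation with black-box access only.

    x:                (T, S) array: S driving signals over T time steps.
    query_model(x):   returns a class-probability vector (the only model access).
    influential_mask: boolean (T, S) array of influential positions,
                      precomputed offline for the current traffic context.
    delta_init:       offline perturbation for this context (warm start).
    """
    delta = delta_init * influential_mask      # confine the warm start to the mask
    coords = np.flatnonzero(influential_mask)  # only these coordinates are searched
    queries = 0
    while queries < query_budget:
        probs = query_model(x + delta); queries += 1
        if probs.argmax() != true_label:       # success: the model misclassifies
            return delta, queries
        # Zeroth-order gradient estimate: symmetric finite difference of the
        # true-class probability along one randomly chosen masked coordinate.
        u = np.zeros(x.size)
        u[np.random.choice(coords)] = 1.0
        u = u.reshape(x.shape)
        p_plus = query_model(x + delta + mu * u)[true_label]
        p_minus = query_model(x + delta - mu * u)[true_label]
        queries += 2
        g = (p_plus - p_minus) / (2.0 * mu)
        delta -= step * g * u                  # descend on true-class confidence
    return None, queries                       # budget exhausted, attack failed
```

In this sketch, sampling coordinates only from the context-specific mask and warm-starting from the offline perturbation are the two mechanisms the abstract credits for the reported reductions in queries and attack-generation time.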