WIP: On Robustness of Lane Detection Models to Physical-World Adversarial Attacks

Takami Sato, Qi Alfred Chen
{"title":"WIP: On Robustness of Lane Detection Models to Physical-World Adversarial Attacks","authors":"Takami Sato, Qi Alfred Chen","doi":"10.14722/autosec.2022.23037","DOIUrl":null,"url":null,"abstract":"—Deep Neural Network (DNN)-based lane detection is widely utilized in autonomous driving technologies. At the same time, recent studies demonstrate that adversarial attacks on lane detection can cause serious consequences on particular production-grade autonomous driving systems. However, the gen- erality of the attacks, especially their effectiveness against other state-of-the-art lane detection approaches, has not been well stud- ied. In this work, we report our progress on conducting the first large-scale empirical study to evaluate the robustness of 4 major types of lane detection methods under 3 types of physical-world adversarial attacks in end-to-end driving scenarios. We find that each lane detection method has different security characteristics, and in particular, some models are highly vulnerable to certain types of attack. Surprisingly, but probably not coincidentally, popular production lane centering systems properly select the lane detection approach which shows higher resistance to such attacks. In the near future, more and more automakers will include autonomous driving features in their products. 
We hope that our research will help as many automakers as possible to recognize the risks in choosing lane detection algorithms.","PeriodicalId":399600,"journal":{"name":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Fourth International Workshop on Automotive and Autonomous Vehicle Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14722/autosec.2022.23037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep Neural Network (DNN)-based lane detection is widely utilized in autonomous driving technologies. At the same time, recent studies demonstrate that adversarial attacks on lane detection can cause serious consequences on particular production-grade autonomous driving systems. However, the generality of the attacks, especially their effectiveness against other state-of-the-art lane detection approaches, has not been well studied. In this work, we report our progress on conducting the first large-scale empirical study to evaluate the robustness of 4 major types of lane detection methods under 3 types of physical-world adversarial attacks in end-to-end driving scenarios. We find that each lane detection method has different security characteristics, and in particular, some models are highly vulnerable to certain types of attack. Surprisingly, but probably not coincidentally, popular production lane centering systems properly select the lane detection approach that shows higher resistance to such attacks. In the near future, more and more automakers will include autonomous driving features in their products. We hope that our research will help as many automakers as possible recognize the risks in choosing lane detection algorithms.