Deep learning adversarial attacks and defenses in autonomous vehicles: a systematic literature review from a safety perspective
Ahmed Dawod Mohammed Ibrahum, Manzoor Hussain, Jang-Eui Hong
Artificial Intelligence Review 58(1), published 2024-11-27. DOI: 10.1007/s10462-024-11014-8. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-024-11014-8.pdf
The integration of Deep Learning (DL) algorithms in Autonomous Vehicles (AVs) has revolutionized their precision in navigating various driving scenarios, ranging from anti-fatigue safe driving to intelligent route planning. Despite their proven effectiveness, concerns about the safety and reliability of DL algorithms in AVs have emerged, particularly in light of the escalating threat of adversarial attacks emphasized by recent research. These digital or physical attacks present formidable challenges to AV safety, because AVs rely extensively on collecting and interpreting environmental data through integrated sensors and DL. This paper addresses this pressing issue through a systematic survey that explores adversarial attacks and defenses, focusing specifically on DL in AVs from a safety perspective. Going beyond a review of existing research on adversarial attacks and defenses, the paper introduces a safety scenario taxonomy matrix, inspired by SOTIF (Safety Of The Intended Functionality), designed to augment the safety of DL in AVs. This matrix categorizes safety scenarios into four distinct areas and classifies attacks within those areas under three scenarios, along with two defense scenarios. Furthermore, the paper investigates the testing and evaluation metrics critical for assessing attacks in the context of DL for AVs, and explores the evolving landscape of datasets and simulation platforms. This contribution enriches the ongoing discourse on assuring the safety and reliability of autonomous vehicles, especially in the face of continually evolving adversarial challenges.
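For readers unfamiliar with the attack family the survey taxonomizes, the following is a minimal, hedged sketch of the Fast Gradient Sign Method (FGSM), a canonical digital adversarial attack on DL perception models. It is illustrative only, not code from the paper; the model, input tensors, and epsilon value are placeholder assumptions.

```python
# Illustrative FGSM sketch (not from the reviewed paper).
# Assumes a differentiable classifier and inputs normalized to [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,
                label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return `image` perturbed one signed-gradient step toward higher loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift each pixel by +/- epsilon along the sign of the loss gradient,
    # then clamp back to the valid [0, 1] pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

In an AV context, such a perturbation targets camera frames before they reach the perception model; the physical attacks the survey also covers realize comparable perturbations as printed patches or stickers in the driving environment.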
Journal introduction:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.