Securing (vision-based) autonomous systems: taxonomy, challenges, and defense mechanisms against adversarial threats

Alvaro Lopez Pellicer, Plamen Angelov, Neeraj Suri

Artificial Intelligence Review 58(12), published 2025-10-08. DOI: 10.1007/s10462-025-11373-w. Available at: https://link.springer.com/article/10.1007/s10462-025-11373-w
The rapid integration of computer vision into Autonomous Systems (AS) has introduced new vulnerabilities, particularly in the form of adversarial threats capable of manipulating perception and control modules. While multiple surveys have addressed adversarial robustness in deep learning, few have systematically analyzed how these threats manifest across the full stack and life-cycle of AS. This review bridges that gap by presenting a structured synthesis that spans both foundational vision-centric literature and recent AS-specific advances, with a focus on digital and physical threat vectors. We introduce a unified framework mapping adversarial threats across the AS stack and life-cycle, supported by three novel analytical matrices: the Life-cycle–Attack Matrix (linking attacks to the data, training, and inference stages), the Stack–Threat Matrix (localizing vulnerabilities throughout the autonomy stack), and the Exposure–Impact Matrix (connecting attack exposure to AI design vulnerabilities and operational consequences). Drawing on these models, we define holistic requirements for effective AS defenses and critically appraise the current landscape of adversarial robustness. Finally, we propose the AS-ADS scoring framework to enable comparative assessment of defense methods in terms of their alignment with the practical needs of AS, and outline actionable directions for advancing the robustness of vision-based autonomous systems.
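To make the class of digital, inference-stage threat vectors the abstract refers to concrete, the sketch below is illustrative only and is not drawn from the paper: it implements the well-known Fast Gradient Sign Method (FGSM) of Goodfellow et al., a canonical single-step attack on a vision classifier of the kind the Life-cycle–Attack Matrix would place at the inference stage. The function name fgsm_perturb and the epsilon default are our own choices; PyTorch is assumed.

```python
# Minimal illustrative sketch (assumes PyTorch; not taken from the paper).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` via FGSM.

    `model` is any differentiable classifier returning logits for a
    batched input; `epsilon` bounds the per-pixel perturbation
    (L-infinity norm). The names here are hypothetical examples.
    """
    # Make a leaf tensor so gradients accumulate on the input itself.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take one step in the direction that maximally increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return perturbed.clamp(0.0, 1.0).detach()
```

Applied to camera frames, a perturbation of this kind can be imperceptible to a human operator yet flip a perception module's output, which is precisely why the survey argues defenses must be evaluated against the full AS stack and life-cycle rather than the model in isolation.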
About the journal:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.