Securing (vision-based) autonomous systems: taxonomy, challenges, and defense mechanisms against adversarial threats

Impact Factor 13.9 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Alvaro Lopez Pellicer, Plamen Angelov, Neeraj Suri
{"title":"Securing (vision-based) autonomous systems: taxonomy, challenges, and defense mechanisms against adversarial threats","authors":"Alvaro Lopez Pellicer,&nbsp;Plamen Angelov,&nbsp;Neeraj Suri","doi":"10.1007/s10462-025-11373-w","DOIUrl":null,"url":null,"abstract":"<div><p>The rapid integration of computer vision into Autonomous Systems (AS) has introduced new vulnerabilities, particularly in the form of adversarial threats capable of manipulating perception and control modules. While multiple surveys have addressed adversarial robustness in deep learning, few have systematically analyzed how these threats manifest across the full stack and life-cycle of AS. This review bridges that gap by presenting a structured synthesis that spans both, foundational vision-centric literature and recent AS-specific advances, with focus on digital and physical threat vectors. We introduce a unified framework mapping adversarial threats across the AS stack and life-cycle, supported by three novel analytical matrices: the <i>Life-cycle–Attack Matrix</i> (linking attacks to data, training, and inference stages), the <i>Stack–Threat Matrix</i> (localizing vulnerabilities throughout the autonomy stack), and the <i>Exposure–Impact Matrix</i> (connecting attack exposure to AI design vulnerabilities and operational consequences). Drawing on these models, we define holistic requirements for effective AS defenses and critically appraise the current landscape of adversarial robustness. Finally, we propose the <i>AS-ADS</i> scoring framework to enable comparative assessment of defense methods in terms of their alignment with the practical needs of AS, and outline actionable directions for advancing the robustness of vision-based autonomous systems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 12","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11373-w.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11373-w","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The rapid integration of computer vision into Autonomous Systems (AS) has introduced new vulnerabilities, particularly in the form of adversarial threats capable of manipulating perception and control modules. While multiple surveys have addressed adversarial robustness in deep learning, few have systematically analyzed how these threats manifest across the full stack and life-cycle of AS. This review bridges that gap by presenting a structured synthesis that spans both foundational vision-centric literature and recent AS-specific advances, with a focus on digital and physical threat vectors. We introduce a unified framework mapping adversarial threats across the AS stack and life-cycle, supported by three novel analytical matrices: the Life-cycle–Attack Matrix (linking attacks to data, training, and inference stages), the Stack–Threat Matrix (localizing vulnerabilities throughout the autonomy stack), and the Exposure–Impact Matrix (connecting attack exposure to AI design vulnerabilities and operational consequences). Drawing on these models, we define holistic requirements for effective AS defenses and critically appraise the current landscape of adversarial robustness. Finally, we propose the AS-ADS scoring framework to enable comparative assessment of defense methods in terms of their alignment with the practical needs of AS, and outline actionable directions for advancing the robustness of vision-based autonomous systems.
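To make the "adversarial threats capable of manipulating perception" concrete, the sketch below shows a canonical inference-stage evasion attack, the Fast Gradient Sign Method (FGSM, Goodfellow et al. 2015), which the abstract's Life-cycle–Attack Matrix would file under the inference stage. This is a minimal illustration assuming a PyTorch/torchvision setup; the ResNet-18 classifier and random input are stand-ins, not the paper's experimental configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Stand-in perception model; a real AS would use its deployed detector/classifier.
model = resnet18(weights="IMAGENET1K_V1").eval()

# ImageNet normalization is applied inside the attack so the perturbation
# budget eps is expressed in raw pixel space ([0, 1]).
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """One-step FGSM: x is an NCHW batch in [0, 1], y the true labels."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model((x - MEAN) / STD), y)
    loss.backward()
    # Take one signed-gradient step that maximizes the loss,
    # then clip back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Usage with a random stand-in image (a real attack perturbs a benign input):
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```

The perturbation is imperceptibly small by construction (bounded by eps per pixel), which is exactly why such attacks on perception modules are hard to detect at the system level.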

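The three matrices are analytical tables in the paper itself. Purely as a loose illustration of the Life-cycle–Attack Matrix's stage-to-attack mapping described in the abstract, it could be encoded as below; the attack families listed are standard examples from the adversarial ML literature, not the paper's actual matrix entries.

```python
# Hypothetical encoding of a life-cycle-stage -> attack-family mapping.
# Stage names follow the abstract (data, training, inference); entries are
# illustrative, drawn from the general literature.
LIFECYCLE_ATTACK_MATRIX: dict[str, list[str]] = {
    "data": ["data poisoning", "backdoor/trojan trigger insertion"],
    "training": ["gradient manipulation", "model poisoning in federated settings"],
    "inference": ["digital evasion (e.g. FGSM/PGD)", "physical adversarial patches"],
}

def attacks_for_stage(stage: str) -> list[str]:
    """Return the attack families mapped to a given life-cycle stage."""
    return LIFECYCLE_ATTACK_MATRIX.get(stage, [])

print(attacks_for_stage("inference"))
```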
Source journal

Artificial Intelligence Review (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles per year: 194
Review time: 5.3 months
About the journal: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.