A Unified Framework for Adversarial Patch Attacks Against Visual 3D Object Detection in Autonomous Driving

IF 8.3 · CAS Region 1 (Engineering & Technology) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Jian Wang;Fan Li;Lijun He
{"title":"针对自动驾驶视觉三维目标检测的对抗性补丁攻击统一框架","authors":"Jian Wang;Fan Li;Lijun He","doi":"10.1109/TCSVT.2025.3525725","DOIUrl":null,"url":null,"abstract":"The rapid development of vision-based 3D perceptions, in conjunction with the inherent vulnerability of deep neural networks to adversarial examples, motivates us to investigate realistic adversarial attacks for the 3D detection models in autonomous driving scenarios. Due to the perspective transformation from 3D space to the image and object occlusion, current 2D image attacks are difficult to generalize to 3D detectors and are limited by physical feasibility. In this work, we propose a unified framework to generate physically printable adversarial patches with different attack goals: 1) instance-level hiding—pasting the learned patches to any target vehicle allows it to evade the detection process; 2) scene-level creating—placing the adversarial patch in the scene induces the detector to perceive plenty of fake objects. Both crafted patches are universal, which can take effect across a wide range of objects and scenes. To achieve above attacks, we first introduce the differentiable image-3D rendering algorithm that makes it possible to learn a patch located in 3D space. Then, two novel designs are devised to promote effective learning of patch content: 1) a Sparse Object Sampling Strategy is proposed to ensure that the rendered patches follow the perspective criterion and avoid being occluded during training, and 2) a Patch-Oriented Adversarial Optimization is used to facilitate the learning process focused on the patch areas. Both digital and physical-world experiments are conducted and demonstrate the effectiveness of our approaches, revealing potential threats when confronted with malicious attacks. We also investigate the defense strategy using adversarial augmentation to further improve the model’s robustness.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 5","pages":"4949-4962"},"PeriodicalIF":8.3000,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Unified Framework for Adversarial Patch Attacks Against Visual 3D Object Detection in Autonomous Driving\",\"authors\":\"Jian Wang;Fan Li;Lijun He\",\"doi\":\"10.1109/TCSVT.2025.3525725\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The rapid development of vision-based 3D perceptions, in conjunction with the inherent vulnerability of deep neural networks to adversarial examples, motivates us to investigate realistic adversarial attacks for the 3D detection models in autonomous driving scenarios. Due to the perspective transformation from 3D space to the image and object occlusion, current 2D image attacks are difficult to generalize to 3D detectors and are limited by physical feasibility. In this work, we propose a unified framework to generate physically printable adversarial patches with different attack goals: 1) instance-level hiding—pasting the learned patches to any target vehicle allows it to evade the detection process; 2) scene-level creating—placing the adversarial patch in the scene induces the detector to perceive plenty of fake objects. Both crafted patches are universal, which can take effect across a wide range of objects and scenes. 
To achieve above attacks, we first introduce the differentiable image-3D rendering algorithm that makes it possible to learn a patch located in 3D space. Then, two novel designs are devised to promote effective learning of patch content: 1) a Sparse Object Sampling Strategy is proposed to ensure that the rendered patches follow the perspective criterion and avoid being occluded during training, and 2) a Patch-Oriented Adversarial Optimization is used to facilitate the learning process focused on the patch areas. Both digital and physical-world experiments are conducted and demonstrate the effectiveness of our approaches, revealing potential threats when confronted with malicious attacks. We also investigate the defense strategy using adversarial augmentation to further improve the model’s robustness.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"35 5\",\"pages\":\"4949-4962\"},\"PeriodicalIF\":8.3000,\"publicationDate\":\"2025-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10824853/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10824853/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The rapid development of vision-based 3D perception, together with the inherent vulnerability of deep neural networks to adversarial examples, motivates us to investigate realistic adversarial attacks on 3D detection models in autonomous driving scenarios. Owing to the perspective transformation from 3D space to the image plane and to object occlusion, current 2D image attacks are difficult to generalize to 3D detectors and are limited in physical feasibility. In this work, we propose a unified framework for generating physically printable adversarial patches with different attack goals: 1) instance-level hiding, where pasting the learned patch onto any target vehicle allows it to evade detection; and 2) scene-level creating, where placing the adversarial patch in the scene induces the detector to perceive numerous fake objects. Both crafted patches are universal and take effect across a wide range of objects and scenes. To achieve these attacks, we first introduce a differentiable image-3D rendering algorithm that makes it possible to learn a patch located in 3D space. Two novel designs are then devised to promote effective learning of the patch content: 1) a Sparse Object Sampling Strategy ensures that the rendered patches follow the perspective criterion and avoid being occluded during training, and 2) a Patch-Oriented Adversarial Optimization focuses the learning process on the patch areas. Both digital and physical-world experiments demonstrate the effectiveness of our approaches, revealing potential threats posed by malicious attacks. We also investigate a defense strategy based on adversarial augmentation to further improve the model's robustness.
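The abstract gives no implementation details, so the following is only a minimal PyTorch sketch, under assumed geometry and with a stand-in detector, of the two ideas it describes: differentiably rendering a planar patch placed in 3D into the camera image via perspective projection, and optimizing only the patch pixels so that a detector's objectness collapses (the instance-level hiding goal). The names render_planar_patch, ToyDetector, and learn_hiding_patch, the choice of loss, and the omission of occlusion handling are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): render a planar 3D patch into an image
# differentiably, then optimize only the patch texture to suppress detections.

import torch
import torch.nn as nn
import torch.nn.functional as F


def render_planar_patch(patch, image, K, R, t, origin, e_u, e_v):
    """Differentiably composite a planar patch texture (3 x Hp x Wp) into `image`.

    The patch plane is X(u, v) = origin + u * e_u + v * e_v with (u, v) in [0, 1]^2
    in world coordinates; K is the 3x3 intrinsic matrix, (R, t) the world-to-camera
    extrinsics. Occlusion is ignored in this sketch.
    """
    _, _, H, W = image.shape
    # Homography mapping patch-plane coords (u, v, 1) to homogeneous pixel coords.
    H_mat = K @ torch.stack([R @ e_u, R @ e_v, R @ origin + t], dim=1)   # 3x3
    H_inv = torch.inverse(H_mat)

    # For every image pixel, recover its (u, v) location on the patch plane.
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)             # H x W x 3
    uvw = pix @ H_inv.T                                                  # H x W x 3
    uv = uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-6)

    # grid_sample expects coordinates in [-1, 1]; pixels outside the patch fall
    # outside this range, which also gives us the paste mask.
    grid = (uv * 2.0 - 1.0).unsqueeze(0)                                 # 1 x H x W x 2
    warped = F.grid_sample(patch.unsqueeze(0), grid, align_corners=False)
    inside = ((uv > 0) & (uv < 1)).all(dim=-1).float()[None, None]       # 1 x 1 x H x W
    return image * (1.0 - inside) + warped * inside


class ToyDetector(nn.Module):
    """Stand-in for a camera-based detector: outputs a per-location objectness map."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                      nn.Conv2d(16, 1, 3, 2, 1))

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))


def learn_hiding_patch(images, cameras, steps=200, lr=0.01):
    """Patch-oriented optimization: gradients flow only into the patch texture."""
    detector = ToyDetector().eval()
    for p in detector.parameters():
        p.requires_grad_(False)

    patch = torch.rand(3, 64, 64, requires_grad=True)     # texture in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        loss = 0.0
        for img, (K, R, t, origin, e_u, e_v) in zip(images, cameras):
            adv = render_planar_patch(patch, img, K, R, t, origin, e_u, e_v)
            # "Hiding" objective: push objectness everywhere toward zero.
            loss = loss + detector(adv).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                         # keep it physically printable
    return patch.detach()
```

Here `images` would be a list of 1 x 3 x H x W tensors and `cameras` a matching list of (K, R, t, origin, e_u, e_v) float tensors describing where the patch sits in each scene; averaging the loss over many scenes is what makes the learned patch universal. The paper's scene-level creating attack would instead reward spurious high-objectness responses in regions containing no real object, and its Sparse Object Sampling Strategy, occlusion handling, and adversarial-augmentation defense are not reproduced in this toy sketch.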
Source Journal
CiteScore: 13.80
Self-citation rate: 27.40%
Articles published: 660
Review time: 5 months
Journal Description: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.