AdvNeRF: Generating 3D Adversarial Meshes With NeRF to Fool Driving Vehicles

IF 8.0 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS
Boyuan Zhang;Jiaxu Li;Yucheng Shi;Yahong Han;Qinghua Hu
{"title":"AdvNeRF:用NeRF生成3D对抗网格来欺骗驾驶车辆","authors":"Boyuan Zhang;Jiaxu Li;Yucheng Shi;Yahong Han;Qinghua Hu","doi":"10.1109/TIFS.2025.3609180","DOIUrl":null,"url":null,"abstract":"Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors to provide accurate 3D visual perception of their surroundings. However, adversarial vulnerabilities in these models pose several risks, as they can lead to misinterpretation of sensor data, ultimately endangering safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to 2D-pixel spaces, lacking physical realism and applicability in the 3D world. To address these limitations, we introduce AdvNeRF, a groundbreaking approach for generating 3D adversarial meshes that effectively target both vision and LiDAR models simultaneously. AdvNeRF is a Transferable Target Adversarial Attack that leverages Neural Radiance Fields (NeRF) to achieve its objectives. NeRF ensures the creation of high-quality adversarial objects and enhances attack performance by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. By integrating NeRF, our method represents a leap forward in improving the robustness and effectiveness of 3D adversarial attacks. Experimental results validate the superior performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight the critical implications of AdvNeRF, emphasizing its potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, thus marking an advancement in the field of adversarial attacks and 3D perception security.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"9673-9684"},"PeriodicalIF":8.0000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AdvNeRF: Generating 3D Adversarial Meshes With NeRF to Fool Driving Vehicles\",\"authors\":\"Boyuan Zhang;Jiaxu Li;Yucheng Shi;Yahong Han;Qinghua Hu\",\"doi\":\"10.1109/TIFS.2025.3609180\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors to provide accurate 3D visual perception of their surroundings. However, adversarial vulnerabilities in these models pose several risks, as they can lead to misinterpretation of sensor data, ultimately endangering safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to 2D-pixel spaces, lacking physical realism and applicability in the 3D world. To address these limitations, we introduce AdvNeRF, a groundbreaking approach for generating 3D adversarial meshes that effectively target both vision and LiDAR models simultaneously. AdvNeRF is a Transferable Target Adversarial Attack that leverages Neural Radiance Fields (NeRF) to achieve its objectives. 
NeRF ensures the creation of high-quality adversarial objects and enhances attack performance by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. By integrating NeRF, our method represents a leap forward in improving the robustness and effectiveness of 3D adversarial attacks. Experimental results validate the superior performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight the critical implications of AdvNeRF, emphasizing its potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, thus marking an advancement in the field of adversarial attacks and 3D perception security.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"9673-9684\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11159325/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11159325/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors to provide accurate 3D visual perception of their surroundings. However, adversarial vulnerabilities in these models pose several risks, as they can lead to misinterpretation of sensor data, ultimately endangering safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to 2D-pixel spaces, lacking physical realism and applicability in the 3D world. To address these limitations, we introduce AdvNeRF, a groundbreaking approach for generating 3D adversarial meshes that effectively target both vision and LiDAR models simultaneously. AdvNeRF is a Transferable Target Adversarial Attack that leverages Neural Radiance Fields (NeRF) to achieve its objectives. NeRF ensures the creation of high-quality adversarial objects and enhances attack performance by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. By integrating NeRF, our method represents a leap forward in improving the robustness and effectiveness of 3D adversarial attacks. Experimental results validate the superior performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight the critical implications of AdvNeRF, emphasizing its potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, thus marking an advancement in the field of adversarial attacks and 3D perception security.
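To give intuition for the multi-view optimization the abstract describes, below is a minimal conceptual sketch of optimizing a NeRF-rendered object's appearance so that a victim detector is fooled in expectation over sampled viewpoints. This is not the paper's implementation: the object interface (appearance_parameters, render), the pose sampler, and the victim detector are all hypothetical placeholders, and the sketch covers only the camera branch, omitting the LiDAR loss and the final mesh extraction described in the paper.

```python
# Conceptual sketch of viewpoint-robust adversarial optimization in the
# spirit of AdvNeRF. All interfaces below are hypothetical stand-ins.
import torch

def sample_camera_pose():
    """Draw a random viewpoint (azimuth, elevation) around the object.

    Returned as a raw angle pair; a real pipeline would build a full
    camera-to-world matrix for the renderer from these angles.
    """
    azimuth = torch.rand(()) * 2 * torch.pi
    elevation = torch.rand(()) * (torch.pi / 6)  # stay near road level
    return torch.stack([azimuth, elevation])

def adversarial_nerf_attack(nerf_object, detector, target_class,
                            steps=500, lr=1e-2, views_per_step=4):
    """Optimize the object's appearance so the detector is fooled from
    many viewpoints at once (expectation over sampled views)."""
    # Assume the NeRF exposes differentiable appearance parameters.
    params = list(nerf_object.appearance_parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(views_per_step):
            pose = sample_camera_pose()
            image = nerf_object.render(pose)       # differentiable render
            logits = detector(image.unsqueeze(0))  # victim model forward
            # Targeted attack: pull predictions toward target_class.
            loss = loss + torch.nn.functional.cross_entropy(
                logits, torch.tensor([target_class]))
        opt.zero_grad()
        (loss / views_per_step).backward()
        opt.step()
    return nerf_object
```

Averaging the loss over freshly sampled poses at every step is what makes the optimized object robust "from multiple angles": gradients that only help a single viewpoint cancel out, while perturbations consistent across views survive.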
Source Journal
IEEE Transactions on Information Forensics and Security
Category: Engineering Technology - Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 7.40%
Annual articles: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.