Boyuan Zhang;Jiaxu Li;Yucheng Shi;Yahong Han;Qinghua Hu
{"title":"AdvNeRF:用NeRF生成3D对抗网格来欺骗驾驶车辆","authors":"Boyuan Zhang;Jiaxu Li;Yucheng Shi;Yahong Han;Qinghua Hu","doi":"10.1109/TIFS.2025.3609180","DOIUrl":null,"url":null,"abstract":"Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors to provide accurate 3D visual perception of their surroundings. However, adversarial vulnerabilities in these models pose several risks, as they can lead to misinterpretation of sensor data, ultimately endangering safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to 2D-pixel spaces, lacking physical realism and applicability in the 3D world. To address these limitations, we introduce AdvNeRF, a groundbreaking approach for generating 3D adversarial meshes that effectively target both vision and LiDAR models simultaneously. AdvNeRF is a Transferable Target Adversarial Attack that leverages Neural Radiance Fields (NeRF) to achieve its objectives. NeRF ensures the creation of high-quality adversarial objects and enhances attack performance by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. By integrating NeRF, our method represents a leap forward in improving the robustness and effectiveness of 3D adversarial attacks. Experimental results validate the superior performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight the critical implications of AdvNeRF, emphasizing its potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, thus marking an advancement in the field of adversarial attacks and 3D perception security.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"9673-9684"},"PeriodicalIF":8.0000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AdvNeRF: Generating 3D Adversarial Meshes With NeRF to Fool Driving Vehicles\",\"authors\":\"Boyuan Zhang;Jiaxu Li;Yucheng Shi;Yahong Han;Qinghua Hu\",\"doi\":\"10.1109/TIFS.2025.3609180\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors to provide accurate 3D visual perception of their surroundings. However, adversarial vulnerabilities in these models pose several risks, as they can lead to misinterpretation of sensor data, ultimately endangering safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to 2D-pixel spaces, lacking physical realism and applicability in the 3D world. To address these limitations, we introduce AdvNeRF, a groundbreaking approach for generating 3D adversarial meshes that effectively target both vision and LiDAR models simultaneously. AdvNeRF is a Transferable Target Adversarial Attack that leverages Neural Radiance Fields (NeRF) to achieve its objectives. 
NeRF ensures the creation of high-quality adversarial objects and enhances attack performance by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. By integrating NeRF, our method represents a leap forward in improving the robustness and effectiveness of 3D adversarial attacks. Experimental results validate the superior performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight the critical implications of AdvNeRF, emphasizing its potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, thus marking an advancement in the field of adversarial attacks and 3D perception security.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"9673-9684\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11159325/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11159325/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
AdvNeRF: Generating 3D Adversarial Meshes With NeRF to Fool Driving Vehicles
Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors for accurate 3D perception of their surroundings. However, adversarial vulnerabilities in these models pose serious risks: they can cause sensor data to be misinterpreted, ultimately endangering safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to the 2D pixel space and lack the physical realism needed to apply in the 3D world. To address these limitations, we introduce AdvNeRF, an approach for generating 3D adversarial meshes that target vision and LiDAR models simultaneously. AdvNeRF is a transferable targeted adversarial attack built on Neural Radiance Fields (NeRF). NeRF enables the creation of high-quality adversarial objects and strengthens the attack by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. Experimental results validate the performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight AdvNeRF's potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, marking an advancement in the field of adversarial attacks and 3D perception security.
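The abstract's central mechanism, optimizing a single 3D object so that its renderings fool a detector from whatever viewpoint it is seen, can be made concrete with a minimal sketch. The paper's actual pipeline is not reproduced here: the toy renderer, toy detector, and expectation-over-viewpoints loop below are hypothetical stand-ins, assumed only to illustrate the multi-view optimization idea.

```python
# Illustrative sketch only: an EOT-style, multi-viewpoint adversarial
# optimization loop in the spirit of the abstract. The renderer and detector
# are toy stand-ins, not the AdvNeRF implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_VIEWS, TEX_DIM, IMG_DIM = 8, 256, 3 * 32 * 32


class ToyRenderer(nn.Module):
    """Stand-in for a differentiable NeRF/mesh renderer: maps a learnable
    texture vector plus an (azimuth, elevation) viewpoint to a flat image."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(TEX_DIM + 2, IMG_DIM)
        for p in self.proj.parameters():          # geometry/rendering is fixed;
            p.requires_grad_(False)               # only the texture is attacked

    def forward(self, texture, views):
        return torch.sigmoid(self.proj(torch.cat([texture, views], dim=-1)))


class ToyDetector(nn.Module):
    """Stand-in for a frozen object detector's per-image objectness score."""

    def __init__(self):
        super().__init__()
        self.head = nn.Linear(IMG_DIM, 1)
        for p in self.head.parameters():
            p.requires_grad_(False)

    def forward(self, imgs):
        return torch.sigmoid(self.head(imgs))     # P(object detected)


renderer, detector = ToyRenderer(), ToyDetector()
texture = torch.zeros(TEX_DIM, requires_grad=True)  # adversarial parameters
opt = torch.optim.Adam([texture], lr=0.05)

for step in range(200):
    # Sample fresh viewpoints each step so the attack cannot overfit one angle.
    views = torch.rand(NUM_VIEWS, 2) * 2 - 1
    imgs = renderer(texture.unsqueeze(0).expand(NUM_VIEWS, -1), views)
    # Untargeted objective: minimize detection confidence averaged over views.
    loss = detector(imgs).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        texture.clamp_(-1.0, 1.0)  # keep texture in a plausible, printable range

with torch.no_grad():
    score = detector(renderer(texture.unsqueeze(0).expand(NUM_VIEWS, -1),
                              torch.rand(NUM_VIEWS, 2) * 2 - 1)).mean()
print(f"mean detection score after attack: {score.item():.3f}")
```

Even in this toy form, the design point the abstract emphasizes survives: because the loss is averaged over randomly sampled viewpoints, the optimized object must degrade detection from many angles at once rather than from a single camera pose.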
Journal introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, and surveillance, as well as systems and applications that incorporate these features.