Transferable universal adversarial attack against 3D object detection with latent feature disruption
Mumuxin Cai, Xupeng Wang, Ferdous Sohel, Dian Xiao, Hang Lei
Journal of Systems Architecture, Volume 166, Article 103446. DOI: 10.1016/j.sysarc.2025.103446. Published 2025-05-22.
3D object detection models are highly vulnerable to adversarial attacks, which expose their weaknesses and, when addressed, help improve model robustness. Existing adversarial attack methods against LiDAR scenes are typically optimized for a single sample and perform poorly in terms of transferability. Adversarial attacks that are both universal and transferable can provide further guidance for robustness studies of 3D object detection. In this paper, we propose a universal adversarial perturbation attack against 3D object detection models, which simultaneously suppresses detection results and disrupts latent features. Specifically, the universal adversarial perturbation launches sample-agnostic attacks; it is encoded in elaborate perturbation voxel units and adapts to LiDAR scenes of varying scale as well as to 3D object detectors with different point cloud representations. The proposed transferable attack focuses on the latent feature space and deviates the detectors' outputs at shallow layers. Moreover, a layer activation loss function is designed to suppress the significant features extracted by the backbone network. Extensive experiments on multiple popular 3D object detectors and large-scale datasets demonstrate that the proposed method achieves superior attack success rates, exposing critical robustness issues in current LiDAR-based 3D object detection models.
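To make the abstract's two objectives concrete, the sketch below illustrates one plausible way such an attack could be structured: a shared (sample-agnostic) perturbation applied to voxelized input, updated to jointly lower detection scores and dampen salient shallow-layer activations. This is a minimal illustration under assumed interfaces, not the paper's implementation; the detector hooks (backbone_shallow, head), tensor shapes, and loss weighting are hypothetical.

```python
# Minimal sketch of the latent-feature-disruption idea (assumed formulation,
# not the authors' code). Detector interface and voxel shapes are hypothetical.
import torch


def layer_activation_loss(feature_maps, k=16):
    """Penalize the strongest backbone activations at shallow-layer outputs,
    approximating the 'suppress significant features' objective."""
    loss = torch.zeros((), device=feature_maps[0].device)
    for feat in feature_maps:                      # each feat: [B, C, H, W]
        flat = feat.abs().flatten(2)               # [B, C, H*W]
        topk = flat.topk(k=min(k, flat.size(-1)), dim=-1).values
        loss = loss + topk.mean()
    return loss


def universal_attack_step(detector, voxel_batch, delta, alpha=0.01, eps=0.1):
    """One update of a sample-agnostic perturbation `delta` shared across scenes.
    `detector.backbone_shallow` and `detector.head` are assumed hooks, not a real API."""
    delta = delta.clone().detach().requires_grad_(True)
    adv = voxel_batch + delta                      # apply shared perturbation voxel units
    shallow = detector.backbone_shallow(adv)       # shallow-layer feature maps
    scores = detector.head(shallow)                # objectness / classification scores
    # Jointly suppress detection outputs and salient shallow-layer features.
    loss = scores.sigmoid().mean() + layer_activation_loss([shallow])
    loss.backward()
    with torch.no_grad():
        delta = (delta - alpha * delta.grad.sign()).clamp_(-eps, eps)
    return delta.detach()
```

Because the update minimizes both detection scores and shallow-feature magnitudes with a single bounded perturbation, the resulting attack does not depend on any particular input sample, which is the property the abstract credits for transferability across detectors.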
Journal introduction:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.