Guojia Li, Simin Xu, Yan Cao, Mingyue Cao, Yihong Zhang
Title: Fight light with light: A review of physical adversarial attack within light transmission pipeline
DOI: 10.1016/j.neucom.2026.133034
Journal: Neurocomputing, Volume 676, Article 133034
Publication date: 2026-05-01 (Epub 2026-02-16)
JCR: Q1, Computer Science, Artificial Intelligence; Impact factor: 6.5
URL: https://www.sciencedirect.com/science/article/pii/S0925231226004315
Citations: 0
Abstract
Deep Neural Networks (DNNs) remain vulnerable to physical adversarial attacks. Attacks that target the light transmission pipeline are especially stealthy and pose severe real-world threats because they are flexible and easy to deploy. To advance the understanding of this emerging threat, we establish a unified framework that systematically analyzes the entire light transmission pipeline as a continuous attack surface. Within this framework, we identify two primary attack vectors, manipulating the light transmission channel and attacking the image perception device, and systematically characterize their methodologies across nine key attributes. We further formalize the optimization process for generating adversarial light patterns and assess the physical deployment methods of such attacks. We also propose a graded framework for evaluating transferability and demonstrate that, while physical adversarial examples in this domain exhibit high stealthiness, their transferability across different model architectures remains limited. Finally, we outline current challenges and discuss future research directions.
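The optimization process the abstract refers to can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration, not the paper's actual method: a hypothetical linear "classifier" score stands in for a DNN, the light pattern is modeled as a single additive Gaussian spot (e.g., a projected laser dot) parameterized by position and intensity, and the names `apply_light_pattern`, `attack_loss`, and `optimize_light_attack` are invented here. The attacker searches over the physical pattern's parameters, rather than per-pixel noise, to drive the model's score down.

```python
import numpy as np

def apply_light_pattern(image, center, intensity, sigma=2.0):
    """Additively blend a Gaussian light spot onto a grayscale image in [0, 1]."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    spot = intensity * np.exp(-((xs - center[0])**2 + (ys - center[1])**2)
                              / (2.0 * sigma**2))
    return np.clip(image + spot, 0.0, 1.0)

def attack_loss(image, weights):
    """Toy linear classifier score; the attack tries to minimize it."""
    return float(weights.ravel() @ image.ravel())

def optimize_light_attack(image, weights, steps=50, lr=0.5):
    """Descend on (x, y, intensity) via central finite-difference gradients."""
    params = np.array([image.shape[1] / 2.0, image.shape[0] / 2.0, 0.5])

    def loss(p):
        return attack_loss(apply_light_pattern(image, (p[0], p[1]), p[2]), weights)

    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            eps = np.zeros(3)
            eps[i] = 1e-2
            grad[i] = (loss(params + eps) - loss(params - eps)) / 2e-2
        params -= lr * grad                      # gradient descent on the score
        params[2] = np.clip(params[2], 0.0, 1.0)  # keep intensity physically plausible
    return params, loss(params)
```

A real attack in this literature would replace the linear score with a target DNN's loss and the Gaussian spot with a differentiable model of the specific light source (laser, projector, flicker), but the structure of the search, optimizing a low-dimensional physical parameterization against the model, is the same.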
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.