{"title":"Remote Perception Attacks against Camera-based Object Recognition Systems and Countermeasures","authors":"Yanmao Man, Ming Li, Ryan M. Gerdes","doi":"10.1145/3596221","DOIUrl":null,"url":null,"abstract":"In vision-based object recognition systems imaging sensors perceive the environment and then objects are detected and classified for decision-making purposes; e.g., to maneuver an automated vehicle around an obstacle or to raise alarms for intruders in surveillance settings. In this work we demonstrate how camera-based perception can be unobtrusively manipulated to enable an attacker to create spurious objects or alter an existing object, by remotely projecting adversarial patterns into cameras, exploiting two common effects in optical imaging systems, viz., lens flare/ghost effects and auto-exposure control. To improve the robustness of the attack, we generate optimal patterns by integrating adversarial machine learning techniques with a trained end-to-end channel model. We experimentally demonstrate our attacks using a low-cost projector on three different cameras, and under different environments. Results show that, depending on the attack distance, attack success rates can reach as high as 100%, including under targeted conditions. We develop a countermeasure that reduces the problem of detecting ghost-based attacks into verifying whether there is a ghost overlapping with a detected object. We leverage spatiotemporal consistency to eliminate false positives. Evaluation on experimental data provides a worst-case equal error rate of 5%.","PeriodicalId":7055,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":" ","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Cyber-Physical Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3596221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Abstract
In vision-based object recognition systems, imaging sensors perceive the environment, and objects are then detected and classified for decision-making purposes; e.g., to maneuver an automated vehicle around an obstacle or to raise alarms for intruders in surveillance settings. In this work, we demonstrate how camera-based perception can be unobtrusively manipulated, enabling an attacker to create spurious objects or alter an existing object by remotely projecting adversarial patterns into cameras. The attack exploits two common effects in optical imaging systems, namely lens flare/ghost effects and auto-exposure control. To improve the robustness of the attack, we generate optimal patterns by integrating adversarial machine learning techniques with a trained end-to-end channel model. We experimentally demonstrate our attacks using a low-cost projector on three different cameras and under different environments. Results show that, depending on the attack distance, attack success rates can reach as high as 100%, including under targeted conditions. We develop a countermeasure that reduces the problem of detecting ghost-based attacks to verifying whether a ghost overlaps with a detected object, and we leverage spatiotemporal consistency to eliminate false positives. Evaluation on experimental data yields a worst-case equal error rate of 5%.
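The abstract describes the countermeasure at a high level: flag a detected object as potentially spoofed only if its bounding box coincides with a detected lens-ghost region, and use consistency across consecutive frames to suppress false positives from benign, transient flare. The sketch below is a minimal illustration of that idea, not the authors' implementation; the helper names, IoU threshold, and frame-count threshold are all assumptions introduced for clarity.

```python
# Illustrative sketch of a ghost-overlap check with a spatiotemporal filter.
# All names and thresholds here are hypothetical, chosen only to make the
# idea from the abstract concrete.

from dataclasses import dataclass
from typing import List


@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned bounding boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def overlaps_ghost(obj: Box, ghosts: List[Box], iou_thresh: float = 0.3) -> bool:
    """True if the detected object overlaps any detected ghost region."""
    return any(iou(obj, g) >= iou_thresh for g in ghosts)


def is_spoofed(obj_track: List[Box], ghost_tracks: List[List[Box]],
               min_frames: int = 3) -> bool:
    """Raise an alarm only if the object coincides with a ghost region in at
    least `min_frames` consecutive frames (a simple spatiotemporal filter to
    reduce false positives)."""
    consecutive = 0
    for obj, ghosts in zip(obj_track, ghost_tracks):
        if overlaps_ghost(obj, ghosts):
            consecutive += 1
            if consecutive >= min_frames:
                return True
        else:
            consecutive = 0
    return False
```

As a usage example, one would feed `is_spoofed` the per-frame bounding boxes of a tracked detection together with the per-frame ghost regions returned by a ghost detector; a detection that persistently overlaps a ghost would then be treated as a candidate projection attack.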