{"title":"Fundamental Study of Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface","authors":"Tatsuya Oyama, Kota Yoshida, S. Okura, T. Fujino","doi":"10.1109/AsianHOST56390.2022.10022189","DOIUrl":null,"url":null,"abstract":"Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack method on image classification systems using deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported, which are a threat against the traffic-sign-recognition DNNs used in advanced driver assistance systems (ADAS). We previously proposed an attack method that generates a noise area on images by superimposing an electrical signal on the mobile industry processor interface (MIPI) and showed that it can generate a single adversarial mark that triggers a backdoor attack on the input image. As the advanced approach, we propose the targeted misclassification attack method on DNN by the AEs which are generated by small perturbations to various places on the image by the fault injection. The perturbation position for AEs is precalculated in advance against the target traffic-sign image, which will be captured on future driving. The perturbation image (5.2-5.5% area is tampered with) is successfully created by the fault injection attack on MIPI, which is connected to Raspberry Pi. As the experimental results, we confirmed that the traffic-sign-recognition DNN on a Raspberry Pi was successfully misclassified when the target traffic sign was captured.","PeriodicalId":207435,"journal":{"name":"2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AsianHOST56390.2022.10022189","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack on image classification systems that use deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported and pose a threat to the traffic-sign-recognition DNNs used in advanced driver assistance systems (ADAS). We previously proposed an attack that generates a noise area on images by superimposing an electrical signal on the mobile industry processor interface (MIPI), and showed that it can generate a single adversarial mark that triggers a backdoor attack on the input image. Building on that approach, we propose a targeted misclassification attack on DNNs using AEs generated by injecting small perturbations at various positions on the image via fault injection. The perturbation positions for the AEs are precalculated against the target traffic-sign image that will be captured during future driving. The perturbed image (with 5.2-5.5% of its area tampered) is successfully created by the fault injection attack on the MIPI connected to a Raspberry Pi. Experimental results confirmed that the traffic-sign-recognition DNN on the Raspberry Pi misclassified the target traffic sign when it was captured.
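To illustrate the underlying idea of crafting a targeted AE whose perturbation is confined to small, precomputed regions of the image (roughly 5% of the area, as in the tampering budget reported above), the sketch below shows a masked projected-gradient attack in PyTorch. This is not the authors' implementation: the classifier (a stand-in ResNet-18), the mask layout, the target label, and all hyperparameters are illustrative assumptions, and the actual work injects the perturbation through the MIPI interface rather than in software.

```python
# Minimal sketch (assumptions noted above): craft a targeted adversarial example
# whose perturbation is restricted to a small precomputed mask, mimicking the
# idea of tampering only limited regions of the captured image.
import torch
import torch.nn.functional as F
import torchvision.models as models

def masked_targeted_attack(model, image, mask, target_class,
                           steps=50, step_size=2 / 255):
    """Gradient attack toward `target_class`, restricted to `mask` (1 = tamperable)."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target_class)
        grad, = torch.autograd.grad(loss, adv)
        # Step down the loss toward the target class, but only inside the mask.
        adv = (adv - step_size * grad.sign() * mask).clamp(0, 1).detach()
    return adv

if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()   # stand-in classifier
    image = torch.rand(1, 3, 224, 224)             # stand-in traffic-sign image
    # Hypothetical precomputed tamper region: two blocks covering ~5% of pixels.
    mask = torch.zeros(1, 1, 224, 224)
    mask[..., 20:45, 20:70] = 1.0
    mask[..., 150:175, 120:170] = 1.0
    target = torch.tensor([3])                     # arbitrary target label
    adv = masked_targeted_attack(model, image, mask, target)
    print("tampered fraction:", mask.mean().item())
    print("predicted class:", model(adv).argmax(1).item())
```

In the paper's setting, the optimized perturbation would then be realized physically by superimposing a fault-injection signal on the MIPI lines at the corresponding pixel positions, rather than by editing the image in memory as done here.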