{"title":"LaBaNI: Layer-based Noise Injection Attack on Convolutional Neural Networks","authors":"Tolulope A. Odetola, Faiq Khalid, S. R. Hasan","doi":"10.1145/3526241.3530385","DOIUrl":null,"url":null,"abstract":"Hardware accelerator-based CNN inference improves the performance and latency but increases the time-to-market. As a result, CNN deployment on hardware is often outsourced to untrusted third parties (3Ps) with security risks, like hardware Trojans (HTs). Therefore, during the outsourcing, designers conceal the information about initial and final CNN layers from 3Ps. However, this paper shows that this solution is ineffective by proposing a hardware-intrinsic attack (HIA), Layer-based Noise Injection (LaBaNI), which successfully performs misclassification without knowing the initial and final layers. LaBaNi uses the statistical properties of feature maps of the CNN to design the trigger with a very low triggering probability and a payload for misclassification. To show the effectiveness of LaBaNI, we demonstrated it on LeNet and LeNet-3D CNN models deployed on Xilinx's PYNQ board. In the experimental results, the attack is successful, non-periodic, and random, hence difficult to detect. Results show that LaBaNI utilizes up to 4% extra LUTs, 5% extra DSPs, and 2% extra FFs, respectively.","PeriodicalId":188228,"journal":{"name":"Proceedings of the Great Lakes Symposium on VLSI 2022","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Great Lakes Symposium on VLSI 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3526241.3530385","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Hardware accelerator-based CNN inference improves performance and reduces latency but increases time-to-market. As a result, CNN deployment on hardware is often outsourced to untrusted third parties (3Ps), which introduces security risks such as hardware Trojans (HTs). To mitigate this, designers conceal information about the initial and final CNN layers from the 3Ps during outsourcing. However, this paper shows that this defense is ineffective by proposing a hardware-intrinsic attack (HIA), Layer-based Noise Injection (LaBaNI), which successfully causes misclassification without knowledge of the initial and final layers. LaBaNI uses the statistical properties of the CNN's feature maps to design a trigger with a very low triggering probability and a payload that induces misclassification. To show the effectiveness of LaBaNI, we demonstrate it on LeNet and LeNet-3D CNN models deployed on Xilinx's PYNQ board. Experimental results show that the attack is successful, non-periodic, and random, and hence difficult to detect. LaBaNI utilizes at most 4% extra LUTs, 5% extra DSPs, and 2% extra FFs.
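The attack mechanism summarized above, a trigger derived from feature-map statistics plus a noise-injecting payload, can be illustrated in software. The following is a minimal, hypothetical Python sketch: the chosen statistic (the feature-map mean), the rare trigger band, and the names trigger_fires, payload, and hijacked_layer are illustrative assumptions, not the authors' actual design, which is realized as a hardware Trojan in the FPGA accelerator rather than in software.

```python
import numpy as np

# Hypothetical trigger band: in the paper the trigger is tuned from the
# statistical distribution of intermediate feature maps so that it fires
# with very low probability. The statistic and bounds here are assumed
# for illustration only.
TRIGGER_LOW, TRIGGER_HIGH = 0.971, 0.974

def trigger_fires(feature_map: np.ndarray) -> bool:
    """Fire only when the feature-map statistic falls in a rarely observed band."""
    stat = float(feature_map.mean())  # assumed statistic; could be max, variance, etc.
    return TRIGGER_LOW <= stat <= TRIGGER_HIGH

def payload(feature_map: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Inject noise into the feature map to push the downstream layers
    toward misclassification."""
    noise = scale * np.random.randn(*feature_map.shape).astype(feature_map.dtype)
    return feature_map + noise

def hijacked_layer(feature_map: np.ndarray) -> np.ndarray:
    """Wrap an intermediate layer's output: benign unless the trigger fires."""
    if trigger_fires(feature_map):
        return payload(feature_map)
    return feature_map
```

Because the trigger condition depends only on an intermediate feature map, the attack needs no knowledge of the concealed initial or final layers, and its firing pattern tracks the input distribution, which is why it appears non-periodic and random to a detector.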