LaBaNI: Layer-based Noise Injection Attack on Convolutional Neural Networks

Tolulope A. Odetola, Faiq Khalid, S. R. Hasan
{"title":"LaBaNI: Layer-based Noise Injection Attack on Convolutional Neural Networks","authors":"Tolulope A. Odetola, Faiq Khalid, S. R. Hasan","doi":"10.1145/3526241.3530385","DOIUrl":null,"url":null,"abstract":"Hardware accelerator-based CNN inference improves the performance and latency but increases the time-to-market. As a result, CNN deployment on hardware is often outsourced to untrusted third parties (3Ps) with security risks, like hardware Trojans (HTs). Therefore, during the outsourcing, designers conceal the information about initial and final CNN layers from 3Ps. However, this paper shows that this solution is ineffective by proposing a hardware-intrinsic attack (HIA), Layer-based Noise Injection (LaBaNI), which successfully performs misclassification without knowing the initial and final layers. LaBaNi uses the statistical properties of feature maps of the CNN to design the trigger with a very low triggering probability and a payload for misclassification. To show the effectiveness of LaBaNI, we demonstrated it on LeNet and LeNet-3D CNN models deployed on Xilinx's PYNQ board. In the experimental results, the attack is successful, non-periodic, and random, hence difficult to detect. Results show that LaBaNI utilizes up to 4% extra LUTs, 5% extra DSPs, and 2% extra FFs, respectively.","PeriodicalId":188228,"journal":{"name":"Proceedings of the Great Lakes Symposium on VLSI 2022","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Great Lakes Symposium on VLSI 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3526241.3530385","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Hardware accelerator-based CNN inference improves performance and reduces latency, but it increases time-to-market. As a result, CNN deployment on hardware is often outsourced to untrusted third parties (3Ps), which introduces security risks such as hardware Trojans (HTs). To mitigate this, designers conceal information about the initial and final CNN layers from the 3Ps during outsourcing. However, this paper shows that this defense is ineffective by proposing a hardware-intrinsic attack (HIA), Layer-based Noise Injection (LaBaNI), which successfully causes misclassification without knowledge of the initial and final layers. LaBaNI uses the statistical properties of the CNN's feature maps to design a trigger with a very low triggering probability and a payload that causes misclassification. To show its effectiveness, we demonstrate LaBaNI on LeNet and LeNet-3D CNN models deployed on Xilinx's PYNQ board. Experimental results show that the attack is successful, non-periodic, and random, and therefore difficult to detect. LaBaNI uses at most 4% additional LUTs, 5% additional DSPs, and 2% additional FFs.
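The trigger-and-payload idea sketched in the abstract can be illustrated in software. The actual attack is realized as a hardware Trojan inside the FPGA accelerator, so the Python sketch below is only an assumed illustration: the function name, the choice of the feature-map mean as the monitored statistic, the threshold band, and the noise scale are all hypothetical, not the authors' implementation. It shows a rarely-satisfied condition on a feature-map statistic acting as the trigger and noise injection into that feature map acting as the payload.

import numpy as np

# Hypothetical illustration of a layer-based trigger/payload: the Trojan
# monitors a statistic of an intermediate feature map and, only when that
# statistic falls inside a narrow (rarely hit) band, injects noise into it.

RARE_LOW, RARE_HIGH = 0.912, 0.915   # assumed narrow band -> low triggering probability
NOISE_SCALE = 0.5                    # assumed payload strength

def trojaned_layer_output(feature_map: np.ndarray, rng=np.random.default_rng(0)):
    """Return the feature map unchanged unless the trigger condition holds."""
    stat = feature_map.mean()                 # statistic computed from the feature map
    if RARE_LOW <= stat <= RARE_HIGH:         # trigger: rarely-satisfied condition
        noise = rng.normal(0.0, NOISE_SCALE, feature_map.shape)
        return feature_map + noise            # payload: noise injection toward misclassification
    return feature_map                        # benign behaviour for almost all inputs

# Example: only feature maps whose mean lands inside the narrow band are perturbed.
fmap = np.random.rand(6, 28, 28).astype(np.float32)
out = trojaned_layer_output(fmap)

Because the trigger band is narrow relative to the distribution of the monitored statistic, the Trojan stays dormant for almost all inputs, which is consistent with the abstract's claim that the attack is non-periodic, random, and hard to detect.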