BNN-Flip: Enhancing the Fault Tolerance and Security of Compute-in-Memory Enabled Binary Neural Network Accelerators

Akul Malhotra, Chunguang Wang, Sumeet Kumar Gupta
{"title":"BNN-Flip: Enhancing the Fault Tolerance and Security of Compute-in-Memory Enabled Binary Neural Network Accelerators","authors":"Akul Malhotra, Chunguang Wang, Sumeet Kumar Gupta","doi":"10.1109/ASP-DAC58780.2024.10473947","DOIUrl":null,"url":null,"abstract":"Compute-in-memory based binary neural networks or CiM-BNNs offer high energy/area efficiency for the design of edge deep neural network (DNN) accelerators, with only a mild accuracy reduction. However, for successful deployment, the design of CiM-BNNs must consider challenges such as memory faults and data security that plague existing DNN accelerators. In this work, we aim to mitigate both these problems simultaneously by proposing BNN-Flip, a training-free weight transformation algorithm that not only enhances the fault tolerance of CiM-BNNs but also protects them from weight theft attacks. BNN-Flip inverts the rows and columns of the BNN weight matrix in a way that reduces the impact of memory faults on the CiM-BNN’s inference accuracy, while preserving the correctness of the CiM operation. Concurrently, our technique encodes the CiM-BNN weights, securing them from weight theft. Our experiments on various CiM-BNNs show that BNN-Flip achieves an inference accuracy increase of up to 10.55% over the baseline (i.e. CiM-BNNs not employing BNN-Flip) in the presence of memory faults. Additionally, we show that the encoded weights generated by BNN-Flip furnish extremely low (near ‘random guess’) inference accuracy for the adversary attempting weight theft. The benefits of BNN-Flip come with an energy overhead of < 3%.","PeriodicalId":518586,"journal":{"name":"2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"54 7-8","pages":"146-152"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASP-DAC58780.2024.10473947","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Compute-in-memory-based binary neural networks (CiM-BNNs) offer high energy and area efficiency for edge deep neural network (DNN) accelerators, at the cost of only a mild accuracy reduction. However, for successful deployment, the design of CiM-BNNs must address challenges such as memory faults and data security that plague existing DNN accelerators. In this work, we aim to mitigate both problems simultaneously by proposing BNN-Flip, a training-free weight transformation algorithm that not only enhances the fault tolerance of CiM-BNNs but also protects them from weight theft attacks. BNN-Flip inverts the rows and columns of the BNN weight matrix in a way that reduces the impact of memory faults on the CiM-BNN's inference accuracy while preserving the correctness of the CiM operation. Concurrently, our technique encodes the CiM-BNN weights, securing them against weight theft. Our experiments on various CiM-BNNs show that BNN-Flip achieves an inference accuracy increase of up to 10.55% over the baseline (i.e., CiM-BNNs not employing BNN-Flip) in the presence of memory faults. Additionally, we show that the encoded weights generated by BNN-Flip yield extremely low (near 'random guess') inference accuracy for an adversary attempting weight theft. The benefits of BNN-Flip come with an energy overhead of less than 3%.
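To make the correctness-preservation claim concrete, the sketch below (a minimal illustration, not the paper's implementation) demonstrates the flip-and-compensate identity in NumPy: a row flip of the ±1 weight matrix is absorbed by flipping the corresponding input bit, and a column flip by flipping the corresponding output, so the matrix-vector product is recovered exactly. The random flip keys `r` and `c` are placeholders; BNN-Flip selects them to minimize the accuracy impact of memory faults, a selection policy not reproduced here.

```python
# Minimal sketch of the row/column flip identity behind BNN-Flip's
# correctness guarantee. NumPy stands in for the CiM array; the random
# flip keys are placeholders for the paper's fault-aware selection.
import numpy as np

rng = np.random.default_rng(0)

# Original BNN layer: weights and inputs in {-1, +1}.
W = rng.choice([-1, 1], size=(8, 4))   # rows: inputs, columns: output neurons
x = rng.choice([-1, 1], size=8)

# Flip keys: r flips rows, c flips columns (hypothetical random keys).
r = rng.choice([-1, 1], size=8)
c = rng.choice([-1, 1], size=4)

# Encoded weights actually stored in the (faulty, readable) CiM array.
W_enc = (r[:, None] * W) * c[None, :]

# Inference with the keys: pre-flip the inputs by r, post-flip the outputs by c.
y_ref = x @ W                     # fault-free reference result
y_dec = ((x * r) @ W_enc) * c     # decoded result computed from encoded weights
assert np.array_equal(y_ref, y_dec)

# An adversary reading W_enc without r and c sees sign-scrambled weights,
# which is the source of the weight-theft protection.
print("fraction of weights inverted by encoding:", np.mean(W_enc != W))
```

Since r_i² = c_j² = 1 for ±1 keys, ((x ⊙ r) · W_enc) ⊙ c = x · W exactly, so the network's function is unchanged and no retraining is needed; only the input/output flip logic is added to the datapath.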