Robust and Tiny Binary Neural Networks using Gradient-based Explainability Methods

Muhammad Sabih, Mikail Yayla, Frank Hannig, Jürgen Teich, Jian-Jia Chen
{"title":"Robust and Tiny Binary Neural Networks using Gradient-based Explainability Methods","authors":"Muhammad Sabih, Mikail Yayla, Frank Hannig, Jürgen Teich, Jian-Jia Chen","doi":"10.1145/3578356.3592595","DOIUrl":null,"url":null,"abstract":"Binary neural networks (BNNs) are a highly resource-efficient variant of neural networks. The efficiency of BNNs for tiny machine learning (TinyML) systems can be enhanced by structured pruning and making BNNs robust to faults. When used with approximate memory systems, this fault tolerance can be traded off for energy consumption, latency, or cost. For pruning, magnitude-based heuristics are not useful because the weights in a BNN can either be -1 or +1. Global pruning of BNNs has not been studied well so far. Thus, in this paper, we explore gradient-based ranking criteria for pruning BNNs and use them in combination with a sensitivity analysis. For robustness, the state-of-the-art is to train the BNNs with bit-flips in what is known as fault-aware training. We propose a method to guide fault-aware training using gradient-based explainability methods. This allows us to obtain robust and efficient BNNs for deployment on tiny devices. Experiments on audio and image processing applications show that our proposed approach outperforms the existing approaches, making it useful for obtaining efficient and robust models for a slight degradation in accuracy. This makes our approach valuable for many TinyML use cases.","PeriodicalId":370204,"journal":{"name":"Proceedings of the 3rd Workshop on Machine Learning and Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd Workshop on Machine Learning and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3578356.3592595","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Binary neural networks (BNNs) are a highly resource-efficient variant of neural networks. The efficiency of BNNs for tiny machine learning (TinyML) systems can be enhanced by structured pruning and by making BNNs robust to faults. When used with approximate memory systems, this fault tolerance can be traded off against energy consumption, latency, or cost. For pruning, magnitude-based heuristics are not useful because the weights in a BNN can only take the values -1 or +1. Global pruning of BNNs has not been well studied so far. Thus, in this paper, we explore gradient-based ranking criteria for pruning BNNs and use them in combination with a sensitivity analysis. For robustness, the state of the art is to train BNNs with bit-flips in what is known as fault-aware training. We propose a method that guides fault-aware training using gradient-based explainability methods. This allows us to obtain robust and efficient BNNs for deployment on tiny devices. Experiments on audio and image processing applications show that our proposed approach outperforms existing approaches, yielding efficient and robust models at the cost of only a slight degradation in accuracy. This makes our approach valuable for many TinyML use cases.
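Because every binary weight has magnitude 1, a pruning score must come from gradients rather than weight magnitude. Below is a minimal sketch of one plausible gradient-based ranking criterion, a first-order Taylor importance score per output channel; the exact criterion, the function name, and the pruning procedure in the comments are illustrative assumptions, not the paper's verbatim method.

import torch
import torch.nn as nn

def taylor_channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Gradient-based importance per output channel, usable after a
    # calibration batch has been pushed through loss.backward().
    # Magnitude alone is useless for binary weights (all are -1/+1),
    # so the score couples each weight with its gradient: |w * dL/dw|,
    # summed over the input-channel and kernel dimensions.
    assert conv.weight.grad is not None, "run loss.backward() first"
    return (conv.weight * conv.weight.grad).abs().sum(dim=(1, 2, 3))

# Assumed usage: rank channels globally across all layers and prune the
# lowest-scoring ones; a per-layer sensitivity analysis then bounds how
# much pruning each layer can tolerate before accuracy collapses.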
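Fault-aware training exposes the network during training to the bit errors it will later see in approximate memory. The sketch below shows minimal bit-flip injection into binarized weights in the forward pass; the flip probability and injection point are assumptions for illustration, and the paper's actual contribution of guiding this process with explainability methods is not reproduced here.

import torch
import torch.nn.functional as F

def inject_bitflips(w_bin: torch.Tensor, p_flip: float) -> torch.Tensor:
    # Flip the sign of each binary weight with probability p_flip,
    # emulating bit errors in approximate memory; training through
    # these flips makes the BNN robust to them at run time.
    flip_mask = torch.rand_like(w_bin) < p_flip
    return torch.where(flip_mask, -w_bin, w_bin)

# Assumed placement inside a binarized layer's forward pass:
#   w_bin = torch.sign(w_real)                      # binarize to {-1, +1}
#   w_faulty = inject_bitflips(w_bin, p_flip=0.05)  # hypothetical flip rate
#   out = F.conv2d(x, w_faulty, padding=1)
# Gradients reach w_real through a straight-through estimator, as is
# standard in BNN training.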