ApproxDNNFlow: An Evaluation and Exploration Framework for DNNs with Approximate Multipliers

Jide Zhang, Su Zheng, Lingli Wang
{"title":"ApproxDNNFlow: An Evaluation and Exploration Framework for DNNs with Approximate Multipliers","authors":"Jide Zhang, Su Zheng, Lingli Wang","doi":"10.1109/CSTIC52283.2021.9461574","DOIUrl":null,"url":null,"abstract":"Widely used deep neural networks (DNNs) are proved error-tolerant, therefore accurate multipliers in DNNs can be replaced by approximate multipliers to reduce the power consumption. We set up a framework for training and evaluating DNNs based on approximate multipliers. Noisy training is proposed to adjust the parameters to tolerate the error caused by the approximate multipliers. Moreover, the framework can evaluate DNN accuracies with approximate multipliers. In the experiment, four approximate multipliers are evaluated. Based on the DNN inference results on MNIST and CIFAR10 by LeNet, the selected approximate multiplier can reach 99.17% and 65.76% accuracies respectively (the original accuracies are 99.27% and 74.88%) with significant reduction of the power consumption and area. In addition, the inference accuracies can be improved up to 99.21% and 69.5% by the proposed noise training methods. The proposed framework can contribute to the design of effective approximate computing for DNNs in the future.","PeriodicalId":186529,"journal":{"name":"2021 China Semiconductor Technology International Conference (CSTIC)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 China Semiconductor Technology International Conference (CSTIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSTIC52283.2021.9461574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Widely used deep neural networks (DNNs) have been shown to be error-tolerant, so the accurate multipliers in DNNs can be replaced by approximate multipliers to reduce power consumption. We set up a framework for training and evaluating DNNs built on approximate multipliers. Noisy training is proposed to adjust the network parameters so that they tolerate the error introduced by the approximate multipliers. Moreover, the framework can evaluate DNN accuracy under approximate multipliers. In the experiments, four approximate multipliers are evaluated. Based on LeNet inference results on MNIST and CIFAR10, the selected approximate multiplier reaches 99.17% and 65.76% accuracy, respectively (the original accuracies are 99.27% and 74.88%), with a significant reduction in power consumption and area. In addition, the inference accuracies can be improved to 99.21% and 69.5% by the proposed noisy training method. The proposed framework can contribute to the design of effective approximate computing for DNNs in the future.
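The abstract does not include code, but the noisy-training idea can be illustrated with a minimal PyTorch sketch: the error of an approximate multiplier is emulated as noise injected on layer outputs during training, so the learned parameters become tolerant to it. The relative Gaussian error model, the NoisyLinear layer, and the rel_noise magnitude below are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the authors' code): emulate approximate-multiplier error
# during training by perturbing layer outputs, so the trained weights tolerate it.
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer whose output is perturbed while training to mimic the
    error an approximate multiplier would introduce at inference time."""
    def __init__(self, in_features, out_features, rel_noise=0.01):
        super().__init__(in_features, out_features)
        self.rel_noise = rel_noise  # hypothetical relative error magnitude

    def forward(self, x):
        y = super().forward(x)
        if self.training:
            # zero-mean noise scaled by the output magnitude
            y = y + self.rel_noise * y.abs() * torch.randn_like(y)
        return y

# Usage sketch: swap nn.Linear for NoisyLinear in a LeNet-style classifier,
# train normally, then evaluate with a bit-accurate approximate-multiplier model.
model = nn.Sequential(nn.Flatten(), NoisyLinear(28 * 28, 300), nn.ReLU(),
                      NoisyLinear(300, 10))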