Approximate Programming Design for Enhancing Energy, Endurance and Performance of Neural Network Training on NVM-based Systems

Chien-Chung Ho, Wei-Chen Wang, Te-Hao Hsu, Zhi-Duan Jiang, Yung-Chun Li
DOI: 10.1109/nvmsa53655.2021.9628582
Published in: 2021 IEEE 10th Non-Volatile Memory Systems and Applications Symposium (NVMSA)
Publication date: 2021-08-18
Citations: 2

Abstract

Recently, non-volatile memories (NVMs) have been found to offer opportunities for mitigating issues of neural network training on DRAM-based systems, thanks to their near-zero leakage power and high scalability. However, they bring new challenges: energy consumption, lifetime, and performance all degrade under the massive weight/bias updates performed during training. To tackle these issues, this work proposes an approximate write-once memory (WOM) code method that considers the characteristics of weight updates and the error tolerability of neural networks (NNs). In particular, the proposed method aims to effectively reduce the number of writes to NVMs. Experimental results demonstrate that substantial improvements in energy consumption, endurance, and write performance can be achieved simultaneously without sacrificing inference accuracy.
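The abstract does not detail the authors' approximate WOM construction, but such methods build on classic write-once memory codes, which allow a cell group to be rewritten without an erase by only programming bits from 0 to 1. As an illustrative sketch (not the paper's scheme), the classic Rivest-Shamir <2,3> WOM code below stores 2 data bits in 3 cells and supports two successive writes before an erase is needed:

```python
# Classic Rivest-Shamir <2,3> WOM code: 2 bits in 3 cells, two writes
# per erase. Cells may only be programmed 0 -> 1 between erases.

# First-generation codewords (used when the cells are still all-zero).
FIRST = {0b00: (0, 0, 0), 0b01: (0, 0, 1), 0b10: (0, 1, 0), 0b11: (1, 0, 0)}
# Second-generation codewords are the bitwise complements of the first.
SECOND = {v: tuple(1 - b for b in cw) for v, cw in FIRST.items()}


def decode(cells):
    """Recover the 2-bit value from a 3-cell codeword of either generation."""
    for table in (FIRST, SECOND):
        for value, cw in table.items():
            if cells == cw:
                return value
    raise ValueError(f"not a valid codeword: {cells}")


def write(cells, value):
    """Store a 2-bit value, setting cells only from 0 to 1 (no erase)."""
    if decode(cells) == value:
        return cells  # already stores this value; no programming needed
    target = FIRST[value] if cells == (0, 0, 0) else SECOND[value]
    # Monotonicity check: a WOM write may never clear a programmed cell.
    assert all(t >= c for c, t in zip(cells, target)), "third write needs erase"
    return target
```

A usage example: starting from erased cells `(0, 0, 0)`, `write(cells, 0b01)` programs one cell, and a subsequent `write(..., 0b11)` moves to the complemented codeword `(0, 1, 1)` without clearing any bit. An approximate variant in the spirit of the paper would additionally skip or coarsen updates whose error the NN can tolerate, further cutting physical writes.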