Approximate Programming Design for Enhancing Energy, Endurance and Performance of Neural Network Training on NVM-based Systems
Chien-Chung Ho, Wei-Chen Wang, Te-Hao Hsu, Zhi-Duan Jiang, Yung-Chun Li
2021 IEEE 10th Non-Volatile Memory Systems and Applications Symposium (NVMSA), published 2021-08-18
DOI: 10.1109/nvmsa53655.2021.9628582
Citations: 2
Abstract
Non-volatile memories (NVMs) have recently been shown to mitigate the issues of neural network training on DRAM-based systems, thanks to their near-zero leakage power and high scalability. However, they introduce new challenges: the massive weight/bias updates performed during training phases degrade energy consumption, lifetime, and performance. To tackle these issues, this work proposes an approximate write-once memory (WOM) code method that exploits the characteristics of weight updates and the error tolerance of neural networks. In particular, the proposed method aims to effectively reduce the number of writes to NVMs. Experimental results demonstrate that substantial improvements in energy consumption, endurance, and write performance can be achieved simultaneously without sacrificing inference accuracy.
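For readers unfamiliar with WOM codes, the sketch below shows the classic <2,3> WOM code of Rivest and Shamir, the kind of coding scheme the paper's approximate method builds on. It is a generic illustration under assumed names (`decode`, `write` are hypothetical helpers), not the authors' scheme: two successive 2-bit values are stored in three binary cells using only 0 -> 1 transitions, so the second logical update needs no cell reset.

```python
# Classic <2,3> write-once-memory (WOM) code (Rivest & Shamir, 1982).
# Generic illustration only, NOT the paper's approximate variant:
# two consecutive 2-bit values fit in 3 binary cells, with all
# transitions monotone (bits are only set, never cleared).

FIRST_GEN = {0b00: 0b000, 0b01: 0b100, 0b10: 0b010, 0b11: 0b001}
DECODE1 = {c: v for v, c in FIRST_GEN.items()}

def decode(cells: int) -> int:
    """Map a 3-cell state back to its 2-bit logical value."""
    if bin(cells).count("1") <= 1:           # first-generation codeword
        return DECODE1[cells]
    return DECODE1[~cells & 0b111]           # second generation = complement

def write(cells: int, value: int) -> int:
    """Store `value`, only ever setting cells (0 -> 1), never resetting."""
    if decode(cells) == value:               # value unchanged: free
        return cells
    if cells == 0b000:                       # first-generation write
        return FIRST_GEN[value]
    if bin(cells).count("1") == 1:           # second generation: the
        return ~FIRST_GEN[value] & 0b111     # complement covers the old bits
    raise ValueError("cells exhausted; an erase is required")

# Two successive updates share one 3-cell block without an erase:
state = 0b000                  # erased block
state = write(state, 0b01)     # -> 0b100
state = write(state, 0b10)     # -> 0b101 (only sets bits)
assert decode(state) == 0b10
```

Per the abstract, the authors extend this idea with an approximate encoding: because neural networks tolerate small errors in weight updates, codewords need not represent every update exactly, which further reduces the number of physical writes to the NVM.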