Duy-Thanh Nguyen and I. Chang. 2019 International SoC Design Conference (ISOCC), published 2019-10-06. DOI: 10.1109/ISOCC47750.2019.9078532
Energy-efficient DNN-training with Stretchable DRAM Refresh Controller and Critical-bit Protection
Training a DNN is a time-consuming process that requires intensive memory resources. Many software-based approaches have been proposed to improve the performance and energy efficiency of DNN inference, yet training hardware has received limited attention. In this work, we present a novel DRAM architecture with a stretchable refresh controller and critical-bit protection. Our method targets both the main memory and the graphics memory of the training system. Evaluated on GEM5-GPGPUsim, the proposed DRAM architecture achieves 23% and 12% DRAM energy reduction with 32-bit floating point on the main and graphics memories, respectively. It also improves system performance by 0.43–4.12% while incurring a negligible accuracy drop when training DNNs.
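The critical-bit idea rests on the asymmetric importance of bits in an IEEE-754 float32: the sign and exponent fields dominate a value's magnitude, while low mantissa bits barely matter, so refresh can be relaxed for the latter. The following sketch (not from the paper; the bit positions follow the IEEE-754 single-precision layout) illustrates why a flip in an unprotected low mantissa bit is benign while a flip in a "critical" exponent bit is catastrophic:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of x's IEEE-754 float32 representation and return the result."""
    (u,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", u ^ (1 << bit)))
    return y

w = 0.3125  # an example weight value
# Bit 31 is the sign and bits 30-23 the exponent (the "critical" bits);
# bits 22-0 are the mantissa.
low_flip = flip_bit(w, 0)    # flips the mantissa LSB: error on the order of 2**-25
exp_flip = flip_bit(w, 30)   # flips the exponent MSB: value jumps by ~2**126
print(low_flip, exp_flip)
```

A DRAM controller that strongly refreshes only the sign/exponent bits of stored float32 values can therefore tolerate retention failures in the remaining bits with little effect on training accuracy, which is the intuition behind the reported energy savings.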