DRAMA
Duy-Thanh Nguyen, Changhong Min, Nhut-Minh Ho, I. Chang
Proceedings of the 39th International Conference on Computer-Aided Design (ICCAD), November 2, 2020
DOI: 10.1145/3400302.3415637 (https://doi.org/10.1145/3400302.3415637)
Citations: 1
Abstract
As DRAM density increases, the refresh overhead becomes more significant. This is especially problematic in systems that require large DRAM capacity, such as those used for training deep neural networks (DNNs). To address this problem, we present DRAMA, a novel architecture that exploits the approximate nature of DNNs: non-critical bits are not refreshed, while critical bits are refreshed normally. The refresh time of the critical bits is concealed by employing per-bank refreshes, significantly improving training system performance. Furthermore, the potential race hazard of the per-bank refresh technique is prevented by a novel command scheduler in the DRAM controller. Our experiments on various recent DNNs show that DRAMA improves training system performance by 10.4% and saves 23.77% of DRAM energy compared to a conventional architecture.
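As a rough illustration of the critical/non-critical bit split that DRAMA's refresh policy relies on, the sketch below partitions an IEEE-754 float32 training value into refresh-protected high bits and refresh-exempt low mantissa bits. The abstract does not specify the partition, so the 16-bit cut point, the function names, and the zeroing of decayed bits are all illustrative assumptions, not the paper's actual scheme:

```python
import struct

# Hypothetical cut point: protect the sign (1 bit), exponent (8 bits), and
# the top 7 mantissa bits; treat the low 16 mantissa bits as non-critical.
# The actual partition used by DRAMA is not given in this abstract.
CRITICAL_BITS = 16
MASK = ((1 << CRITICAL_BITS) - 1) << (32 - CRITICAL_BITS)

def split_float32(value: float) -> tuple[int, int]:
    """Return (critical_bits, non_critical_bits) of a float32 word."""
    word = struct.unpack("<I", struct.pack("<f", value))[0]
    return word & MASK, word & ~MASK

def drop_non_critical(value: float) -> float:
    """Emulate decay of unrefreshed cells by zeroing the low mantissa.

    Real DRAM cells decay toward 0 or 1 depending on true/anti-cell
    layout; zeroing is a simplification for illustration only.
    """
    critical, _ = split_float32(value)
    return struct.unpack("<f", struct.pack("<I", critical))[0]

if __name__ == "__main__":
    w = 0.123456789
    print(w, "->", drop_non_critical(w))  # small relative error only
```

With this assumed cut, losing the low 16 mantissa bits bounds the relative error at roughly 2^-7 (under 1%), the kind of perturbation that approximate-computing work on DNN training treats as tolerable.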