AQSS: Accelerator of Quantization Neural Networks with Stochastic Approach
Takeo Ueki, Keisuke Iwai, T. Matsubara, T. Kurokawa
2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW), November 2018
DOI: 10.1109/CANDARW.2018.00033
Citations: 1
Abstract
In recent years, deep neural networks (DNNs) have become widespread, and several high-throughput hardware implementations of DNNs have been proposed. A key challenge for such hardware is reducing power consumption, because DNNs require a large number of product-sum operations. Previous work presented accelerators that use logarithmic quantization to cut power consumption by replacing multipliers with shifters; however, most of them support only inference. In this paper, an Accelerator of Quantization neural networkS with Stochastic approach (AQSS) is proposed. It applies a stochastic approach to logarithmic quantization, enabling DNNs to perform both inference and training under logarithmic quantization. A prototype of AQSS was implemented on a field-programmable gate array (FPGA) (Intel Arria 10 GX 1150) and synthesized with Intel Quartus Prime 17.1 Standard Edition. The prototype is confirmed to achieve 1.8 times the power efficiency of a GPU.
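To make the idea of stochastic logarithmic quantization concrete, below is a minimal NumPy sketch, not the paper's actual hardware design: each value is rounded to a signed power of two, with the exponent rounded up or down at random so the quantizer is unbiased in expectation (a property that matters for training, not just inference). The exponent range and the exact rounding rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_log_quantize(x, exp_min=-8, exp_max=7):
    """Quantize each element of x to a signed power of two.

    The exponent floor(log2|x|) is rounded up with probability
    |x| / 2^floor(log2|x|) - 1, which lies in [0, 1) and makes
    E[q] = x: the quantizer is unbiased in expectation.
    """
    sign = np.sign(x)
    mag = np.abs(x)
    q = np.zeros_like(x, dtype=np.float64)
    nz = mag > 0                                   # zeros stay zero
    lo = np.floor(np.log2(mag[nz]))                # lower power-of-two exponent
    p_up = mag[nz] / np.exp2(lo) - 1.0             # probability of rounding up
    e = lo + (rng.random(lo.shape) < p_up)         # stochastic exponent choice
    e = np.clip(e, exp_min, exp_max)
    q[nz] = sign[nz] * np.exp2(e)
    return q

# With power-of-two weights, a product w * a reduces to shifting a
# by the weight's exponent plus a sign flip, which is why logarithmic
# quantization lets shifters replace multipliers in hardware.
w = stochastic_log_quantize(np.array([0.3, -1.7, 0.05]))
a = np.array([2.0, 4.0, 8.0])
print(w * a)
```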