{"title":"Effect of fixed-point arithmetic on deep belief networks (abstract only)","authors":"Jingfei Jiang, Rongdong Hu, M. Luján","doi":"10.1145/2435264.2435331","DOIUrl":null,"url":null,"abstract":"Deep Belief Networks (DBNs) are state-of-the-art learning algorithms building on a subset of neural networks, Restricted Boltzmann Machine (RBM). DBNs are computationally intensive posing the question of whether DBNs can be FPGA accelerated. Fixed-point arithmetic can have an important influence on the execution time and prediction accuracy of a DBN. Previous studies have focused only on customized RBM accelerators with a fixed data-width. Our results experiments demonstrate that variable data-widths can obtain similar performance levels. We can also observe that the most suitable data-widths for different types of DBN are not unique or fixed. From this we conclude that a DBN accelerator should support various data-widths rather than only fixed one as done in previous work. The processing performance of DBN accelerators in FPGA is almost always constrained not by the capacity of the processing units, but by their on-chip RAM capacity and speed. We propose an efficient memory sub-system combining junction and padding methods to reduce bandwidth usage for DBN accelerators, which shows that supporting various data-widths is not as difficult as it may sound. The cost is only little in hardware terms and does not affect the critical path. We design a generation tool to help users reconfiguring the memory sub-system with arbitrary data-width flexibly. Our tool can also be used as an advanced IP core generator above FPGA memory controller supporting parallel memory access in irregular data-width for other applications.","PeriodicalId":87257,"journal":{"name":"FPGA. ACM International Symposium on Field-Programmable Gate Arrays","volume":"11 1","pages":"273"},"PeriodicalIF":0.0000,"publicationDate":"2013-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"FPGA. ACM International Symposium on Field-Programmable Gate Arrays","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2435264.2435331","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Deep Belief Networks (DBNs) are state-of-the-art learning algorithms built on a class of neural networks, the Restricted Boltzmann Machine (RBM). DBNs are computationally intensive, posing the question of whether they can be accelerated on FPGAs. Fixed-point arithmetic can have an important influence on the execution time and prediction accuracy of a DBN. Previous studies have focused only on customized RBM accelerators with a fixed data-width. Our experimental results demonstrate that variable data-widths can achieve similar performance levels. We also observe that the most suitable data-widths for different types of DBN are neither unique nor fixed. From this we conclude that a DBN accelerator should support various data-widths rather than only a fixed one, as in previous work. The processing performance of DBN accelerators on FPGAs is almost always constrained not by the capacity of the processing units but by on-chip RAM capacity and speed. We propose an efficient memory sub-system combining junction and padding methods to reduce bandwidth usage for DBN accelerators, which shows that supporting various data-widths is not as difficult as it may sound. The hardware cost is small and does not affect the critical path. We design a generation tool that helps users flexibly reconfigure the memory sub-system for arbitrary data-widths. Our tool can also be used as an advanced IP core generator on top of an FPGA memory controller, supporting parallel memory access with irregular data-widths for other applications.
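The paper itself is abstract-only, so the details of the evaluation are not available here. As a minimal sketch of the kind of experiment the abstract describes, the following NumPy snippet quantizes pretrained-like RBM weights to several candidate fixed-point data-widths and compares one-step reconstruction error. All names, the toy data, and the 2-integer-bit format are illustrative assumptions, not the authors' setup.

```python
# Sketch: effect of fixed-point data-width on RBM reconstruction (assumed setup).
import numpy as np

def to_fixed_point(x, int_bits=2, frac_bits=6):
    """Quantize a float array to a signed fixed-point format (assumed field split)."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)
    hi = 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruction_error(v, W, b_h, b_v):
    """One Gibbs half-step up and down; mean squared reconstruction error."""
    h = sigmoid(v @ W + b_h)
    v_rec = sigmoid(h @ W.T + b_v)
    return np.mean((v - v_rec) ** 2)

rng = np.random.default_rng(0)
v = (rng.random((64, 100)) > 0.5).astype(float)   # toy binary visible data
W = rng.normal(0, 0.1, (100, 50))                 # stand-in for pretrained weights
b_h, b_v = np.zeros(50), np.zeros(100)

for frac_bits in (4, 6, 8, 12):                   # candidate data-widths
    Wq = to_fixed_point(W, frac_bits=frac_bits)
    err = reconstruction_error(v, Wq, b_h, b_v)
    print(f"{2 + frac_bits + 1:2d}-bit weights -> reconstruction error {err:.4f}")
```

The abstract does not detail the junction and padding methods of the proposed memory sub-system. The sketch below only illustrates the general idea of joining several irregular-width values inside a fixed-width RAM word and padding the remainder so no value straddles a word boundary; this packing scheme is an assumption made for illustration, not the paper's design.

```python
# Hypothetical packing of irregular-width values into fixed-width memory words.
WORD_BITS = 32

def pack(values, width):
    """Pack unsigned `width`-bit values into 32-bit words, padding each word's tail."""
    per_word = WORD_BITS // width                  # values joined within one word
    words = []
    for i in range(0, len(values), per_word):
        word = 0
        for j, v in enumerate(values[i:i + per_word]):
            word |= (v & ((1 << width) - 1)) << (j * width)
        words.append(word)                         # unused high bits remain as padding
    return words

# Example: 12-bit values -> 2 per 32-bit word, 8 padding bits each.
print([hex(w) for w in pack([1, 2, 3, 4, 5], width=12)])
```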