{"title":"稀疏卷积神经网络的高效加速器","authors":"Weijie You, Chang Wu","doi":"10.1109/ASICON47005.2019.8983560","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a sparse convolutional neural network accelerator design on FPGAs. Similar to the DNNWEAVER architecture, our accelerator uses two-level hierarchy: multiple Processing Units (PUs) and each PU comprises a set of Processing Elements (PEs). To address the irregularity of sparse neural networks, we introduce a novel sparse dataflow for sparse CNN computing as well as weight merging method to balance the computation load on different PUs for better overall efficiency. We implement our design with 32 PUs and 14 PEs in each PU. When compared with DNNWEAVER on VGG16 network, our accelerator achieves 3.49x speedup and 3.05x energy saving on average when running at 150MHz on a Xilinx ZC706 board and reaches the speed of 400 GOPS.","PeriodicalId":319342,"journal":{"name":"2019 IEEE 13th International Conference on ASIC (ASICON)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"An Efficient Accelerator for Sparse Convolutional Neural Networks\",\"authors\":\"Weijie You, Chang Wu\",\"doi\":\"10.1109/ASICON47005.2019.8983560\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose a sparse convolutional neural network accelerator design on FPGAs. Similar to the DNNWEAVER architecture, our accelerator uses two-level hierarchy: multiple Processing Units (PUs) and each PU comprises a set of Processing Elements (PEs). To address the irregularity of sparse neural networks, we introduce a novel sparse dataflow for sparse CNN computing as well as weight merging method to balance the computation load on different PUs for better overall efficiency. We implement our design with 32 PUs and 14 PEs in each PU. When compared with DNNWEAVER on VGG16 network, our accelerator achieves 3.49x speedup and 3.05x energy saving on average when running at 150MHz on a Xilinx ZC706 board and reaches the speed of 400 GOPS.\",\"PeriodicalId\":319342,\"journal\":{\"name\":\"2019 IEEE 13th International Conference on ASIC (ASICON)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 13th International Conference on ASIC (ASICON)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASICON47005.2019.8983560\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 13th International Conference on ASIC (ASICON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASICON47005.2019.8983560","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Efficient Accelerator for Sparse Convolutional Neural Networks
In this paper, we propose a sparse convolutional neural network accelerator design on FPGAs. Similar to the DNNWEAVER architecture, our accelerator uses a two-level hierarchy: multiple Processing Units (PUs), each comprising a set of Processing Elements (PEs). To address the irregularity of sparse neural networks, we introduce a novel sparse dataflow for sparse CNN computing as well as a weight-merging method that balances the computation load across PUs for better overall efficiency. We implement our design with 32 PUs, each containing 14 PEs. Compared with DNNWEAVER on the VGG16 network, our accelerator achieves a 3.49x speedup and 3.05x energy saving on average when running at 150 MHz on a Xilinx ZC706 board, reaching a throughput of 400 GOPS.
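The abstract does not detail the weight-merging scheme, but its goal of equalizing per-PU work over irregular sparsity can be illustrated with a generic greedy load-balancing sketch: filters are ranked by their non-zero weight counts and assigned, heaviest first, to the currently least-loaded PU. This is a minimal sketch under assumed names (balance_filters, nnz_per_filter, num_pus), not the paper's actual algorithm.

# Illustrative sketch only; all names are assumptions, not the paper's implementation.
import heapq
import random

def balance_filters(nnz_per_filter, num_pus=32):
    """Assign filters to PUs so total non-zero MAC work per PU is roughly equal."""
    # Min-heap of (current_load, pu_index) to always pick the least-loaded PU.
    heap = [(0, pu) for pu in range(num_pus)]
    heapq.heapify(heap)
    assignment = {pu: [] for pu in range(num_pus)}
    # Longest-processing-time heuristic: place the heaviest filters first.
    for f in sorted(range(len(nnz_per_filter)), key=lambda i: -nnz_per_filter[i]):
        load, pu = heapq.heappop(heap)
        assignment[pu].append(f)
        heapq.heappush(heap, (load + nnz_per_filter[f], pu))
    return assignment

# Example: 128 filters with varying sparsity (random non-zero counts).
nnz = [random.randint(50, 500) for _ in range(128)]
plan = balance_filters(nnz, num_pus=32)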