Work-in-Progress: BloCirNN: An Efficient Software/Hardware Codesign Approach for Neural Network Accelerators with Block-Circulant Matrix

Yu Qin, Lei Gong, Zhendong Zheng, Chao Wang
2022 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2022
DOI: 10.1109/CODES-ISSS55005.2022.00010
Abstract
Deep neural networks continue to grow in scale, making them both compute- and memory-intensive. To address these problems, we use block-circulant weight matrices and the Fast Fourier Transform (FFT) to compress the model and accelerate computation. Unlike weight pruning, this method does not produce irregular network structures. The main contributions of this paper are the implementation of a convolution module and a fully-connected module with High-Level Synthesis (HLS), and their deployment and performance evaluation on an FPGA platform. Using AlexNet as a case study, we demonstrate that our design is more efficient than the FPGA2016 design.
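The abstract does not detail the kernel itself, but the block-circulant technique it names is standard: a weight matrix is partitioned into b×b circulant blocks, each block is stored as a single length-b vector (a b× compression), and each block's matrix-vector product is computed in O(b log b) via the FFT identity circ(c)·x = IFFT(FFT(c) ⊙ FFT(x)). The sketch below is an illustrative NumPy model of that computation, not the paper's HLS implementation; the function names and the (p, q, b) storage layout are assumptions for illustration.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix defined by first column c with vector x,
    using the FFT identity circ(c) @ x = IFFT(FFT(c) * FFT(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def block_circulant_matvec(blocks, x, b):
    """Multiply a (p*b, q*b) block-circulant matrix with vector x.

    blocks has shape (p, q, b): entry (i, j) holds the defining vector of
    the circulant block in block-row i, block-column j.  Only p*q*b values
    are stored instead of p*q*b*b -- the b-fold compression the paper exploits.
    """
    p, q, _ = blocks.shape
    # FFT each length-b segment of the input once, reuse across block-rows.
    X = np.fft.fft(x.reshape(q, b), axis=1)
    y = np.empty(p * b)
    for i in range(p):
        # Accumulate element-wise products in the frequency domain,
        # then transform back once per block-row.
        acc = np.zeros(b, dtype=complex)
        for j in range(q):
            acc += np.fft.fft(blocks[i, j]) * X[j]
        y[i * b:(i + 1) * b] = np.real(np.fft.ifft(acc))
    return y
```

Because each b×b block needs only one length-b FFT, multiply, and accumulate, the per-block cost drops from O(b²) to O(b log b), which is what makes the approach attractive for an FPGA datapath compared with irregular pruned networks.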