A Low-Power Deconvolutional Accelerator for Convolutional Neural Network Based Segmentation on FPGA: Abstract Only
Shuanglong Liu, Xinyu Niu, W. Luk
Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2018-02-15
DOI: 10.1145/3174243.3174991
Citations: 3
Abstract
Algorithms based on Convolutional Neural Networks (CNNs) have been successful in solving image recognition problems, delivering large accuracy improvements. In recent years, deconvolution layers have been widely used as key components of state-of-the-art CNNs, enabling end-to-end training and models that support tasks such as image segmentation. However, deconvolution algorithms are computationally intensive, which limits their applicability to real-time applications. In particular, there has been little research on efficient implementations of deconvolution algorithms on FPGA platforms. In this work, we propose and develop a fully customized deconvolution architecture for CNN-based segmentation algorithms. In addition, we propose memory sharing between the computation modules of the FPGA-based CNN accelerator, along with other optimization techniques. Furthermore, we develop a hardware mapping framework that automatically generates a high-throughput hardware design for any given CNN model on the target device. Finally, we implement our designs on a Xilinx Zynq-7030: the deconvolution accelerator achieves a performance of 25.6 GOPS at a 200 MHz working frequency and a performance density of 0.064 GOPS/DSP using 32-bit quantization, which significantly outperforms previous designs on FPGAs. A real-time scene segmentation application on the Cityscapes dataset is used to evaluate our CNN accelerator on the Zynq-7030 board; the system achieves a performance of 57.2 GOPS and 0.143 GOPS/DSP using 16-bit quantization, and supports up to 2 frames per second for 512×512 image inputs with a power consumption of only 3.2 W.
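To make the computational pattern concrete: a deconvolution (transposed convolution) layer scatters each input activation, scaled by the kernel, into an overlapping region of a larger output map — which is why it is compute- and bandwidth-intensive compared with a standard convolution. The following minimal single-channel sketch illustrates the operation; it is not the paper's hardware architecture, and the function name and stride handling are illustrative assumptions only.

```python
import numpy as np

def transposed_conv2d(x, w, stride=2):
    """Single-channel 2-D transposed convolution (deconvolution).

    x: (ih, iw) input feature map, w: (kh, kw) kernel.
    Output size follows the standard formula (ih - 1) * stride + kh.
    """
    ih, iw = x.shape
    kh, kw = w.shape
    oh = (ih - 1) * stride + kh
    ow = (iw - 1) * stride + kw
    out = np.zeros((oh, ow), dtype=x.dtype)
    # Each input pixel scatters a scaled copy of the kernel into the
    # output; overlapping contributions accumulate.
    for i in range(ih):
        for j in range(iw):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * w
    return out

# Upsampling a 2x2 map with a 2x2 kernel at stride 2 yields a 4x4 output.
y = transposed_conv2d(np.ones((2, 2)), np.ones((2, 2)), stride=2)
```

The scatter-and-accumulate structure (many partial sums landing in overlapping output windows) is the source of the irregular memory access that FPGA deconvolution accelerators such as the one described here must optimize.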