An Efficient FPGA Accelerator Optimized for High Throughput Sparse CNN Inference

Jiayu Wen, Yufei Ma, Zhongfeng Wang
DOI: 10.1109/APCCAS50809.2020.9301696
Published in: 2020 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)
Publication date: 2020-12-08
Citations: 6

Abstract

Pruning techniques can compress CNN models by setting insignificant weights to zero, relieving the tremendous workload of large-scale CNNs. However, efficiently loading and operating on the nonzero data with high parallelism is a great challenge for hardware architectures because the pruned weights are randomly located. To address this issue, this work proposes a sparsity-aware CNN accelerator that processes irregularly pruned CNN models. A candidate pool architecture is designed to pick only the activations required by the nonzero weights. It is organized as a three-dimensional structure to relieve the workload imbalance caused by random nonzero weight locations and high parallelism. In addition, a dedicated indexing method is designed to cooperate with the candidate pool architecture to realize the whole sparse dataflow. The proposed sparsity-aware CNN accelerator is demonstrated on an Intel Arria 10 FPGA with multiple popular CNN models, achieving up to 89.7% throughput improvement over the baseline design.
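The two ideas in the abstract — zeroing insignificant weights, then using the nonzero weights' indices to fetch only the activations they need — can be sketched in software. This is a minimal illustrative sketch, not the paper's hardware design: `magnitude_prune` and `sparse_dot` are hypothetical helper names, and magnitude-based pruning is one common pruning criterion assumed here for concreteness.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights
    (one common pruning criterion, assumed here for illustration)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def sparse_dot(weights, activations):
    """Multiply only at nonzero-weight positions: the index list plays
    the role of selecting ('picking') just the activations the
    nonzero weights need, analogous to the candidate-pool idea."""
    idx = np.flatnonzero(weights)      # indices of the surviving weights
    return float(weights[idx] @ activations[idx])

rng = np.random.default_rng(0)
w = rng.standard_normal(16)
x = rng.standard_normal(16)
wp = magnitude_prune(w, 0.75)          # keep only the largest 25% of weights
result = sparse_dot(wp, x)             # equals wp @ x, touching 4 activations
```

In hardware, the random positions in `idx` are exactly what makes high-parallelism loading hard; the paper's three-dimensional candidate pool and dedicated indexing method address the resulting workload imbalance.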