Design of fast and sparse accelerator for deep learning model based on FPGA

Shaotong Li, Yuhang Long
{"title":"基于FPGA的深度学习模型快速稀疏加速器设计","authors":"Shaotong Li, Yuhang Long","doi":"10.1117/12.2680554","DOIUrl":null,"url":null,"abstract":"At present, there have been many studies to design various CNN hardware accelerators to accelerate the inference of deep neural network models. The FPGA-based CNN reasoning accelerator can provide sufficient computing power support with flexible data accuracy, lower energy consumption and lower application cost, and has received a lot of attention in the application field of IoT terminal devices with limited computing power and energy consumption. Widespread concern. However, although the current FPGA-based CNN accelerator has greatly improved the speed of model reasoning through various methods, most of the methods cannot be effectively applied to actual terminal scenarios due to limitations in memory and energy consumption. In response to this situation, we designed an acceleration framework that takes into account both inference acceleration and energy consumption. Aiming at the limitation of computing power in the terminal environment, optimize a large number of multiplication operations in the convolution operation that consumes the most computing power in the CNN inference stage, by using local cache and matrix transformation formulas, and skipping pairings by zero values in the calculation process the model inference operation is further accelerated while reducing energy consumption. The experimental results show that compared with the current advanced neural network accelerator, not only the computing power has been significantly improved, but also the energy efficiency ratio has achieved better results. Moreover, this method can not only be implemented in FPGA, but also be migrated to other embedded terminals.","PeriodicalId":201466,"journal":{"name":"Symposium on Advances in Electrical, Electronics and Computer Engineering","volume":"23 16","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Design of fast and sparse accelerator for deep learning model based on FPGA\",\"authors\":\"Shaotong Li, Yuhang Long\",\"doi\":\"10.1117/12.2680554\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"At present, there have been many studies to design various CNN hardware accelerators to accelerate the inference of deep neural network models. The FPGA-based CNN reasoning accelerator can provide sufficient computing power support with flexible data accuracy, lower energy consumption and lower application cost, and has received a lot of attention in the application field of IoT terminal devices with limited computing power and energy consumption. Widespread concern. However, although the current FPGA-based CNN accelerator has greatly improved the speed of model reasoning through various methods, most of the methods cannot be effectively applied to actual terminal scenarios due to limitations in memory and energy consumption. In response to this situation, we designed an acceleration framework that takes into account both inference acceleration and energy consumption. 
Aiming at the limitation of computing power in the terminal environment, optimize a large number of multiplication operations in the convolution operation that consumes the most computing power in the CNN inference stage, by using local cache and matrix transformation formulas, and skipping pairings by zero values in the calculation process the model inference operation is further accelerated while reducing energy consumption. The experimental results show that compared with the current advanced neural network accelerator, not only the computing power has been significantly improved, but also the energy efficiency ratio has achieved better results. Moreover, this method can not only be implemented in FPGA, but also be migrated to other embedded terminals.\",\"PeriodicalId\":201466,\"journal\":{\"name\":\"Symposium on Advances in Electrical, Electronics and Computer Engineering\",\"volume\":\"23 16\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Symposium on Advances in Electrical, Electronics and Computer Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2680554\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Symposium on Advances in Electrical, Electronics and Computer Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2680554","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Many studies have designed CNN hardware accelerators to speed up the inference of deep neural network models. FPGA-based CNN inference accelerators can provide sufficient computing power with flexible data precision, lower energy consumption, and lower application cost, and have therefore received wide attention for IoT terminal devices, where computing power and energy budgets are limited. However, although current FPGA-based CNN accelerators have greatly improved model inference speed through various methods, most of these methods cannot be effectively applied in real terminal scenarios because of memory and energy constraints. In response, we designed an acceleration framework that addresses both inference speed and energy consumption. To cope with the limited computing power of the terminal environment, we optimize the large number of multiplications in the convolution operation, which consumes the most computing power in the CNN inference stage, by using local caching and matrix transformation formulas, and by skipping operand pairs that contain zero values during computation; this further accelerates model inference while reducing energy consumption. Experimental results show that, compared with current state-of-the-art neural network accelerators, our design not only significantly improves computing throughput but also achieves a better energy-efficiency ratio. Moreover, the method is not limited to FPGAs and can be migrated to other embedded terminals.
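The abstract does not specify which "matrix transformation formulas" reduce the multiplication count; the standard transform of this kind for small convolution kernels is Winograd's minimal filtering. Assuming that interpretation, the C sketch below computes a 1-D convolution tile F(2,3) with 4 multiplications instead of the 6 that direct computation requires; the function name and interface are ours, not the paper's.

```c
/* Illustrative 1-D Winograd F(2,3) tile: trades the 6 multiplications of
 * direct convolution for 4 (plus cheap additions), which maps well to
 * DSP-limited FPGA fabric. Computes
 *   y[0] = d[0]*g[0] + d[1]*g[1] + d[2]*g[2]
 *   y[1] = d[1]*g[0] + d[2]*g[1] + d[3]*g[2]
 * from a 4-element input tile d and a 3-tap filter g. */
void winograd_f2_3(const float d[4], const float g[3], float y[2])
{
    /* Filter transform (can be precomputed once per filter). */
    float u0 = g[0];
    float u1 = 0.5f * (g[0] + g[1] + g[2]);
    float u2 = 0.5f * (g[0] - g[1] + g[2]);
    float u3 = g[2];

    /* Input transform fused with the 4 element-wise multiplications. */
    float m0 = (d[0] - d[2]) * u0;
    float m1 = (d[1] + d[2]) * u1;
    float m2 = (d[2] - d[1]) * u2;
    float m3 = (d[1] - d[3]) * u3;

    /* Output transform. */
    y[0] = m0 + m1 + m2;
    y[1] = m1 - m2 - m3;
}
```

In a 2-D convolution the same transform is nested along both dimensions (F(2x2, 3x3)), reducing the 36 multiplications of a direct 2x2-output tile to 16.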
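The circuit-level details of the zero-skipping scheme are likewise not given in the abstract. As a rough illustration only, this C sketch shows the general idea of skipping any multiply-accumulate pair in which either operand is zero, so a datapath built this way never spends a multiplier cycle (or its energy) on a product known to be zero.

```c
#include <stddef.h>

/* Hypothetical sketch of zero-skipping: a multiply-accumulate pair is
 * skipped entirely when either operand is zero, which is how sparse
 * activations or pruned weights translate into saved multiplications
 * and energy on an FPGA. */
float dot_skip_zero(const float *act, const float *wgt, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (act[i] != 0.0f && wgt[i] != 0.0f) /* skip zero pairings */
            acc += act[i] * wgt[i];
    }
    return acc;
}
```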