Accelerating Deep Neural Networks Using FPGAs and ZYNQ

H. Lee, Jae Wook Jeon
{"title":"Accelerating Deep Neural Networks Using FPGAs and ZYNQ","authors":"H. Lee, Jae Wook Jeon","doi":"10.1109/TENSYMP52854.2021.9550853","DOIUrl":null,"url":null,"abstract":"This article aims at implementing a Deep Neural Network (DNN) using Field Programmable Gate Arrays (FPGAs) for real time deep learning inference in embedded systems. In now days DNNs are widely used where high accuracy is required. However, due to the structural complexity, deep learning models are highly computationally intensive. To improve the system performance, optimization techniques such as weight quantization and pruning are commonly adopted. Another approach to improve the system performance is by applying heterogeneous architectures. Processor with Graphics Processing Unit (GPU) architectures are commonly used for deep learning training and inference acceleration. However, GPUs are expensive and consume much power that not a perfect solution for embedded systems. In this paper, we implemented a deep neural network on a Zynq SoC which is a heterogenous system integrated of ARM processor and FPGA. We trained the model with MNIST database, quantized the model’s 32-bit floating point weights and bias into integer and implemented model to inference in FPGA. As a result, we deployed a network on an embedded system while maintaining inference accuracy and accelerated the system performance with using less resources.","PeriodicalId":137485,"journal":{"name":"2021 IEEE Region 10 Symposium (TENSYMP)","volume":"68 9","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Region 10 Symposium (TENSYMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TENSYMP52854.2021.9550853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

This article aims at implementing a Deep Neural Network (DNN) on Field Programmable Gate Arrays (FPGAs) for real-time deep learning inference in embedded systems. Nowadays, DNNs are widely used wherever high accuracy is required, but their structural complexity makes deep learning models highly computationally intensive. To improve system performance, optimization techniques such as weight quantization and pruning are commonly adopted. Another approach is to apply heterogeneous architectures: processors paired with Graphics Processing Units (GPUs) are commonly used to accelerate deep learning training and inference. However, GPUs are expensive and consume considerable power, so they are not an ideal solution for embedded systems. In this paper, we implemented a deep neural network on a Zynq SoC, a heterogeneous system integrating an ARM processor and an FPGA. We trained the model on the MNIST database, quantized the model’s 32-bit floating-point weights and biases into integers, and implemented the model for inference on the FPGA. As a result, we deployed the network on an embedded system while maintaining inference accuracy, and accelerated it while using fewer resources.
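The abstract does not publish the authors' exact quantization scheme, so the sketch below is only illustrative: a minimal NumPy example of symmetric per-tensor int8 post-training quantization of a dense layer's weights and biases, followed by the integer multiply-accumulate that such a design would run on the FPGA fabric. The layer sizes, the helper name `quantize_symmetric`, and the bias handling are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    """Map a float32 tensor to signed integers with one per-tensor scale.
    (Illustrative helper; not from the paper.)"""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = np.max(np.abs(x)) / qmax          # largest magnitude maps to qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# Example: quantize one trained dense layer and run integer inference.
# Weights and input are random stand-ins for a trained MNIST-sized layer.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(784, 128)).astype(np.float32)   # 28x28 input, 128 units
b = rng.normal(scale=0.1, size=128).astype(np.float32)
x = rng.random(784).astype(np.float32)                          # one flattened image

w_q, w_s = quantize_symmetric(w)
x_q, x_s = quantize_symmetric(x)
b_q = np.round(b / (x_s * w_s)).astype(np.int32)  # bias kept at accumulator scale

# Only the integer multiply-accumulate below would run in the FPGA fabric;
# the single rescale back to float happens once per layer.
acc = x_q.astype(np.int32) @ w_q.astype(np.int32) + b_q
y = acc * (x_s * w_s)                             # dequantized activations

print("max abs error vs float32:", np.max(np.abs(y - (x @ w + b))))
```

On a Zynq-style design, the int32 accumulation maps naturally onto the FPGA's DSP slices, while the per-layer rescale could stay on the ARM side or be folded into a fixed-point shift; the error printed at the end gives a quick check that accuracy survives the quantization.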