{"title":"A Survey of Open-source Tools for FPGA-based Inference of Artificial Neural Networks","authors":"M. Lebedev, P. Belecky","doi":"10.1109/ivmem53963.2021.00015","DOIUrl":null,"url":null,"abstract":"During the recent years artificial neural networks have become a great part of everyday life. One of the big problems in AI is acceleration of neural network inference using different hardware: from CPUs and GPUs to FPGAs and ASICs. Many open-source tools have been proposed for this purpose. This article contains a review of a range of open-source tools for neural network optimization, acceleration and hardware synthesis. Tools of three types have been chosen for evaluation: 1) translating neural network models into synthesizable C; 2) accelerating neural network models using custom hardware accelerators; 3) synthesizing Verilog from neural network models. Some of the tools have been tested using five simple neural network examples. Intel CPU, NVIDIA GPU and Cyclone V FPGA hardware platforms have been used for evaluation. Results show that the tested tools can successfully process neural network models and optimize them for CPU and GPU execution, whereas FPGA execution results are controversial.","PeriodicalId":360766,"journal":{"name":"2021 Ivannikov Memorial Workshop (IVMEM)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Ivannikov Memorial Workshop (IVMEM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ivmem53963.2021.00015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In recent years, artificial neural networks have become a significant part of everyday life. One of the major challenges in AI is the acceleration of neural network inference on different hardware, from CPUs and GPUs to FPGAs and ASICs. Many open-source tools have been proposed for this purpose. This article reviews a range of open-source tools for neural network optimization, acceleration, and hardware synthesis. Three types of tools were chosen for evaluation: 1) tools that translate neural network models into synthesizable C; 2) tools that accelerate neural network models using custom hardware accelerators; 3) tools that synthesize Verilog from neural network models. Some of the tools were tested on five simple neural network examples, using Intel CPU, NVIDIA GPU, and Cyclone V FPGA hardware platforms. The results show that the tested tools can successfully process neural network models and optimize them for CPU and GPU execution, whereas the FPGA execution results are mixed.
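The abstract does not name the specific tools or example models that were evaluated. As a purely illustrative sketch of the first tool category (translating a neural network model into synthesizable C/C++), the snippet below defines a tiny Keras classifier, roughly comparable in spirit to the "simple neural network examples" mentioned above, and converts it with hls4ml, one well-known open-source tool in that category. The model architecture, the output directory, and the choice of hls4ml itself are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumption: hls4ml as a representative "NN model -> synthesizable C/C++"
# tool; the paper does not specify which tools or example networks were actually used).
import hls4ml
from tensorflow import keras

# A tiny MLP standing in for one of the "simple neural network examples" (hypothetical).
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    keras.layers.Dense(3, activation='softmax'),
])

# Generate a default per-model hls4ml configuration (fixed-point precision, reuse factor, etc.).
config = hls4ml.utils.config_from_keras_model(model, granularity='model')

# Convert the Keras model into an HLS project; the output directory name is arbitrary.
# The backend/part selection would have to match the actual target FPGA (e.g. a Cyclone V
# device via the Intel/Quartus backend) and is omitted here.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj',
)

# Compile a C++ emulation of the generated code for functional checking;
# hls_model.build() would invoke the vendor HLS flow to produce RTL.
hls_model.compile()
```

Tools in the other two categories follow a similar pattern at a high level: they take a trained model description as input and emit either an accelerator configuration or Verilog, rather than HLS C/C++.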