Machine Learning Aided Hardware Resource Estimation for FPGA DNN Implementations

D. Diaconu, L. Petrica, Michaela Blott, M. Leeser
{"title":"机器学习辅助FPGA DNN实现的硬件资源估计","authors":"D. Diaconu, L. Petrica, Michaela Blott, M. Leeser","doi":"10.1109/IPDPSW55747.2022.00022","DOIUrl":null,"url":null,"abstract":"This paper explores methods of improving hardware resource estimation for the implementation of Deep Neural Networks(DNN) on FPGAs using machine learning algorithms. Current approaches consider the DNN and High Level Synthesis (HLS) levels. At the DNN level, most techniques are strictly analytical, and based on rough approximations and FPGA DNN implementation assumptions. The aim of this work is to facilitate design space exploration by providing more accurate resource estimates before running time consuming processes such as High Level Synthesis (HLS) or logic synthesis. We integrated the algorithms in FINN, an end-to-end framework for building Quantized Neural Networks (QNN) FPGA inference accelerators, in order to evaluate and compare them to existing estimation as well as the actual synthesized design. We generated Support Vector Regression (SVR) models for LUT and BRAM estimation, the former yields promising results, while the latter consistently underperforms in comparison to HLS and analytical FINN estimates. Combining the analytical approach used in FINN with SVR LUT estimation provided more accurate results because on its own, SVR had insufficient extrapolation capability. For BRAM estimation, we improved the analytical approach by using a Decision Tree Classifier for predicting distributed or BRAM memory implementation.","PeriodicalId":286968,"journal":{"name":"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Machine Learning Aided Hardware Resource Estimation for FPGA DNN Implementations\",\"authors\":\"D. Diaconu, L. Petrica, Michaela Blott, M. Leeser\",\"doi\":\"10.1109/IPDPSW55747.2022.00022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper explores methods of improving hardware resource estimation for the implementation of Deep Neural Networks(DNN) on FPGAs using machine learning algorithms. Current approaches consider the DNN and High Level Synthesis (HLS) levels. At the DNN level, most techniques are strictly analytical, and based on rough approximations and FPGA DNN implementation assumptions. The aim of this work is to facilitate design space exploration by providing more accurate resource estimates before running time consuming processes such as High Level Synthesis (HLS) or logic synthesis. We integrated the algorithms in FINN, an end-to-end framework for building Quantized Neural Networks (QNN) FPGA inference accelerators, in order to evaluate and compare them to existing estimation as well as the actual synthesized design. We generated Support Vector Regression (SVR) models for LUT and BRAM estimation, the former yields promising results, while the latter consistently underperforms in comparison to HLS and analytical FINN estimates. Combining the analytical approach used in FINN with SVR LUT estimation provided more accurate results because on its own, SVR had insufficient extrapolation capability. 
For BRAM estimation, we improved the analytical approach by using a Decision Tree Classifier for predicting distributed or BRAM memory implementation.\",\"PeriodicalId\":286968,\"journal\":{\"name\":\"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IPDPSW55747.2022.00022\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPSW55747.2022.00022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper explores methods of improving hardware resource estimation for the implementation of Deep Neural Networks (DNNs) on FPGAs using machine learning algorithms. Current approaches consider the DNN and High-Level Synthesis (HLS) levels. At the DNN level, most techniques are strictly analytical, based on rough approximations and assumptions about the FPGA DNN implementation. The aim of this work is to facilitate design space exploration by providing more accurate resource estimates before running time-consuming processes such as HLS or logic synthesis. We integrated the algorithms into FINN, an end-to-end framework for building Quantized Neural Network (QNN) FPGA inference accelerators, in order to evaluate and compare them against the existing estimates as well as the actual synthesized designs. We generated Support Vector Regression (SVR) models for LUT and BRAM estimation; the former yields promising results, while the latter consistently underperforms compared to the HLS and analytical FINN estimates. Combining the analytical approach used in FINN with SVR LUT estimation provided more accurate results, because on its own SVR had insufficient extrapolation capability. For BRAM estimation, we improved the analytical approach by using a Decision Tree Classifier to predict whether memory is implemented as distributed memory or BRAM.
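The abstract only sketches the approach, so the following Python sketch illustrates the two ideas it names: an SVR regressor for per-layer LUT estimation blended with an analytical estimate, and a decision-tree classifier that predicts a distributed-memory versus BRAM implementation before applying an analytical BRAM count. The feature set, the synthetic data, the blending rule, and the placeholder analytical formulas are assumptions for illustration only; they are not the paper's implementation or FINN's actual estimation code.

```python
# Illustrative sketch only: scikit-learn stand-ins for the two estimation
# ideas described in the abstract, trained on synthetic data. Features,
# formulas, and blending weights are assumptions, not the paper's method.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Assumed per-layer features: [MW, MH, SIMD, PE, weight_bits, act_bits]
# (matrix dimensions and folding factors, in the style of FINN layers).
X = rng.integers(low=[16, 16, 1, 1, 1, 1],
                 high=[1024, 1024, 64, 64, 8, 8],
                 size=(200, 6)).astype(float)

# Synthetic "ground truth" LUT counts standing in for post-synthesis reports.
luts = 50 * X[:, 2] * X[:, 3] * X[:, 4] + rng.normal(0, 500, size=200)

# (1) SVR LUT estimator, blended with a placeholder analytical formula.
svr_lut = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
svr_lut.fit(X, luts)

def analytical_lut_estimate(layer):
    # Placeholder analytical model (FINN's real formula differs).
    mw, mh, simd, pe, wbits, abits = layer
    return 55.0 * simd * pe * wbits

layer = np.array([512, 256, 32, 16, 4, 4], dtype=float)
blended = 0.5 * svr_lut.predict(layer[None, :])[0] + \
          0.5 * analytical_lut_estimate(layer)
print(f"blended LUT estimate: {blended:.0f}")

# (2) Decision tree predicting the memory implementation, gating the
# analytical BRAM count. Labels are synthetic: 1 = BRAM, 0 = distributed.
mem_is_bram = (X[:, 0] * X[:, 1] * X[:, 4] > 2e5).astype(int)
mem_clf = DecisionTreeClassifier(max_depth=4).fit(X, mem_is_bram)

if mem_clf.predict(layer[None, :])[0]:
    # Rough stand-in for an analytical count: weight bits per 36Kb BRAM.
    brams = int(np.ceil(layer[0] * layer[1] * layer[4] / 36864))
else:
    brams = 0
print(f"estimated BRAM count: {brams}")
```

The design choice mirrored here is that the classifier only decides *which* resource pool the memory lands in; the count itself still comes from an analytical formula, which is why the abstract describes it as an improvement to the analytical approach rather than a replacement.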