Optimization strategies for neural network deployment on FPGA: An energy-efficient real-time face detection use case

IF 6.0 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Mhd Rashed Al Koutayni, Gerd Reis, Didier Stricker
DOI: 10.1016/j.iot.2025.101676
Journal: Internet of Things, Volume 33, Article 101676
Publication date: 2025-07-10 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2542660525001908
Citations: 0

Abstract

Field programmable gate arrays (FPGAs) are considered promising platforms for accelerating deep neural networks (DNNs) due to their parallel processing capabilities and energy efficiency. However, deploying DNNs on FPGA platforms for computer vision tasks presents unique challenges, such as limited computational resources, constrained power budgets, and the need for real-time performance. This work presents a set of optimization methodologies to enhance the efficiency of real-time DNN inference on FPGA system-on-a-chip (SoC) platforms. These optimizations include architectural modifications, fixed-point quantization, computation reordering, and parallelization. Additionally, hardware/software partitioning is employed to optimize task allocation between the processing system (PS) and programmable logic (PL), along with system integration and interface configuration. To validate these strategies, we apply them to a baseline face detection DNN (FaceBoxes) as a use case. The proposed techniques not only improve the efficiency of FaceBoxes on FPGA but also provide a roadmap for optimizing other DNN-based applications for resource-constrained platforms. Experimental results on the AMD Xilinx ZCU102 board with VGA resolution (480×640×3) input demonstrate a significant increase in efficiency, achieving real-time performance while substantially reducing dynamic energy consumption.
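Among the optimizations listed, fixed-point quantization is the one most easily illustrated outside FPGA tooling. The sketch below shows the general idea of mapping floating-point weights to signed fixed-point words with a fixed number of fractional bits, with rounding and saturation; the word length (8) and fraction bits (4) here are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Quantize a float array to signed fixed-point with `frac_bits`
    fractional bits and a `total_bits` word length (sign bit included)."""
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))      # most negative representable code
    qmax = 2 ** (total_bits - 1) - 1     # most positive representable code
    # Round to the nearest representable step, then saturate at word limits.
    return np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)

def dequantize(q, frac_bits=4):
    """Map fixed-point integer codes back to floats for error analysis."""
    return q.astype(np.float32) / (2 ** frac_bits)

weights = np.array([0.37, -1.25, 2.9, -8.4], dtype=np.float32)
codes = quantize_fixed_point(weights)
print(codes)             # integer codes; -8.4 saturates at the word limit
print(dequantize(codes)) # reconstructed values on the 1/16 grid
```

On an FPGA, the integer codes replace floating-point multiplies with cheap integer arithmetic in the PL fabric; choosing the fraction-bit split per layer trades dynamic range against precision.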
Source journal: Internet of Things
CiteScore: 3.60
Self-citation rate: 5.10%
Articles per year: 115
Review time: 37 days
About the journal: Internet of Things: Engineering Cyber Physical Human Systems is a comprehensive journal encouraging cross-collaboration between researchers, engineers, and practitioners in the fields of IoT and cyber-physical human systems. The journal offers a unique platform to exchange scientific information on the entire breadth of technology, science, and societal applications of the IoT. It places a high priority on timely publication and provides a home for high-quality work. The journal is also interested in publishing topical special issues on any aspect of IoT.