Lightweight FPGA acceleration framework for structurally tailored multi-version MobileNetV1

IF 2.2 | CAS Zone 3 (Engineering & Technology) | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
XuMing Lu, JiaWei Zhang, LuoJie Zhu, XianYang Tan
DOI: 10.1016/j.vlsi.2025.102383
Journal: Integration-The Vlsi Journal, Volume 103, Article 102383
Published: 2025-02-20 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167926025000409
Citations: 0

Abstract

Convolutional neural networks (CNNs) have significantly enhanced image recognition performance through effective feature extraction and weight sharing, establishing themselves as a pivotal research area in computer vision. Despite these advances, CNNs demand substantial computational resources, posing challenges for deployment on resource-constrained embedded devices. Consequently, lightweight CNN models, such as MobileNet, have been developed to optimize computational efficiency. However, these models still necessitate accelerators to achieve optimal performance. Field-programmable gate arrays (FPGAs) present a viable solution for accelerating lightweight CNN models, thanks to their inherent capabilities for high parallelism, superior energy efficiency compared to traditional CPUs or GPUs, and reconfigurability, which adapts well to evolving network architectures. Nevertheless, compact FPGAs are limited by their on-chip logic resources. This limitation, coupled with the need to support multiple pruned versions of MobileNet networks as model structure pruning advances, complicates the FPGA design process and escalates resource allocation and associated costs. To address this issue, we propose a master-slave architecture for the MobileNetV1 computing framework, in which the master module manages task scheduling and resource allocation while slave modules execute the actual convolutional computations. The framework employs a dynamic configuration method that programs the execution parameters of each network layer into the FPGA, enabling adaptability and optimized resource usage. The proposed design was validated on the Altera DE2-115 FPGA evaluation board using the MobileNet-V1-0.5-160 model. Experimental results demonstrated that the recognition speed of the optimized MobileNetV1 model reached 68.9 frames per second (FPS) with an 8-bit data width and a 25 MHz clock, utilizing only 38K logic units, an efficient performance benchmark compared to previous FPGA implementations.
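The abstract's dynamic configuration scheme (per-layer execution parameters programmed into the FPGA, a master scheduler, and slave compute units) can be illustrated with a small software sketch. This is a hypothetical model, not the authors' implementation: the record fields, the example layer list, and the round-robin dispatch policy are assumptions for illustration; only the network family (MobileNet-V1-0.5-160, i.e. width multiplier 0.5 on a 160x160 input) comes from the source.

```python
from dataclasses import dataclass

@dataclass
class LayerConfig:
    """Hypothetical per-layer execution parameters a master module
    could program into FPGA configuration registers."""
    layer_type: str    # "conv" (standard 3x3), "dw_conv" (depthwise), "pw_conv" (pointwise)
    in_channels: int
    out_channels: int
    feature_size: int  # input feature-map width/height for this layer
    stride: int

def mobilenet_v1_05_160_configs():
    """First few layers of MobileNet-V1-0.5-160 as configuration records.
    Channel counts follow the standard MobileNetV1 schedule scaled by the
    0.5 width multiplier; feature sizes start from the 160x160 input."""
    return [
        LayerConfig("conv",     3, 16, 160, 2),  # standard conv, 160 -> 80
        LayerConfig("dw_conv", 16, 16,  80, 1),  # depthwise 3x3
        LayerConfig("pw_conv", 16, 32,  80, 1),  # pointwise 1x1
        LayerConfig("dw_conv", 32, 32,  80, 2),  # depthwise, 80 -> 40
        LayerConfig("pw_conv", 32, 64,  40, 1),
    ]

def dispatch(configs, num_slaves=4):
    """Master-side sketch: assign layer tasks to slave compute units
    round-robin. Real hardware would stream each record into the
    selected slave's config registers before starting computation."""
    return [(i % num_slaves, cfg) for i, cfg in enumerate(configs)]
```

Because only the configuration records change between pruned network versions, the same hardware can serve multiple MobileNet variants without re-synthesis. As a sanity check on the reported figures: at a 25 MHz clock and 68.9 FPS, the design has roughly 25,000,000 / 68.9, about 3.6 x 10^5, clock cycles in which to process one frame.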
Source journal
Integration-The Vlsi Journal (Engineering & Technology: Electronics & Electrical Engineering)
CiteScore: 3.80
Self-citation rate: 5.30%
Articles per year: 107
Review time: 6 months
Journal description: Integration's aim is to cover every aspect of the VLSI area, with an emphasis on cross-fertilization between various fields of science, and the design, verification, test and applications of integrated circuits and systems, as well as closely related topics in process and device technologies. Individual issues will feature peer-reviewed tutorials and articles as well as reviews of recent publications. The intended coverage of the journal can be assessed by examining the following (non-exclusive) list of topics: specification methods and languages; analog/digital integrated circuits and systems; VLSI architectures; algorithms, methods and tools for modeling, simulation, synthesis and verification of integrated circuits and systems of any complexity; embedded systems; high-level synthesis for VLSI systems; logic synthesis and finite automata; testing, design-for-test and test generation algorithms; physical design; formal verification; algorithms implemented in VLSI systems; systems engineering; heterogeneous systems.