{"title":"移动平台上CNN模型加速的自动化解决方案","authors":"Yuhao Liu , Yanhua Ma","doi":"10.1016/j.mejo.2025.106691","DOIUrl":null,"url":null,"abstract":"<div><div>This paper presents an FPGA-based convolutional neural network (CNN) accelerator designed to enhance computational efficiency and flexibility for resource-constrained platforms. While FPGAs offer high energy efficiency and adaptability, large-scale CNN deployments face challenges such as computational intensity, diverse kernel sizes, and hardware limitations. To address these issues, we propose an accelerator optimized across four convolution loop dimensions, ensuring efficient resource utilization and streamlined data transmission. Our architecture incorporates three key innovations: (1) Loop-optimized computation framework, which dynamically balances parallelism between inner and outer loops, maximizing data reuse and preventing performance bottlenecks; (2) Customized data layout and memory management, mitigating bandwidth limitations and ensuring high computational efficiency under varying workloads; (3) Automated parameter optimization, integrating reinforcement learning with Python-based search algorithms to explore design configurations, optimizing performance for specific applications. The accelerator is validated on ZCU111 and ZCU102 FPGA platforms using ResNet-50, ResNet-152, and VGG-16. Results show that 69.9% of computations achieve ≥80% efficiency, 47.1% surpass 90%, and 19.2% exceed 95% efficiency, demonstrating superior performance over prior FPGA implementations. Compared to existing designs, our approach achieves a 64.0% increase in efficiency and a 36.5% boost in throughput, while maintaining flexibility across network architectures. These findings highlight the potential of automated optimization techniques in FPGA-based CNN acceleration.</div></div>","PeriodicalId":49818,"journal":{"name":"Microelectronics Journal","volume":"160 ","pages":"Article 106691"},"PeriodicalIF":1.9000,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Solutions for CNN Model Acceleration on Mobile Platforms\",\"authors\":\"Yuhao Liu , Yanhua Ma\",\"doi\":\"10.1016/j.mejo.2025.106691\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper presents an FPGA-based convolutional neural network (CNN) accelerator designed to enhance computational efficiency and flexibility for resource-constrained platforms. While FPGAs offer high energy efficiency and adaptability, large-scale CNN deployments face challenges such as computational intensity, diverse kernel sizes, and hardware limitations. To address these issues, we propose an accelerator optimized across four convolution loop dimensions, ensuring efficient resource utilization and streamlined data transmission. Our architecture incorporates three key innovations: (1) Loop-optimized computation framework, which dynamically balances parallelism between inner and outer loops, maximizing data reuse and preventing performance bottlenecks; (2) Customized data layout and memory management, mitigating bandwidth limitations and ensuring high computational efficiency under varying workloads; (3) Automated parameter optimization, integrating reinforcement learning with Python-based search algorithms to explore design configurations, optimizing performance for specific applications. 
The accelerator is validated on ZCU111 and ZCU102 FPGA platforms using ResNet-50, ResNet-152, and VGG-16. Results show that 69.9% of computations achieve ≥80% efficiency, 47.1% surpass 90%, and 19.2% exceed 95% efficiency, demonstrating superior performance over prior FPGA implementations. Compared to existing designs, our approach achieves a 64.0% increase in efficiency and a 36.5% boost in throughput, while maintaining flexibility across network architectures. These findings highlight the potential of automated optimization techniques in FPGA-based CNN acceleration.</div></div>\",\"PeriodicalId\":49818,\"journal\":{\"name\":\"Microelectronics Journal\",\"volume\":\"160 \",\"pages\":\"Article 106691\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2025-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Microelectronics Journal\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1879239125001407\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Microelectronics Journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1879239125001407","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Automated Solutions for CNN Model Acceleration on Mobile Platforms
This paper presents an FPGA-based convolutional neural network (CNN) accelerator designed to enhance computational efficiency and flexibility for resource-constrained platforms. While FPGAs offer high energy efficiency and adaptability, large-scale CNN deployments face challenges such as computational intensity, diverse kernel sizes, and hardware limitations. To address these issues, we propose an accelerator optimized across four convolution loop dimensions, ensuring efficient resource utilization and streamlined data transmission. Our architecture incorporates three key innovations: (1) Loop-optimized computation framework, which dynamically balances parallelism between inner and outer loops, maximizing data reuse and preventing performance bottlenecks; (2) Customized data layout and memory management, mitigating bandwidth limitations and ensuring high computational efficiency under varying workloads; (3) Automated parameter optimization, integrating reinforcement learning with Python-based search algorithms to explore design configurations, optimizing performance for specific applications. The accelerator is validated on ZCU111 and ZCU102 FPGA platforms using ResNet-50, ResNet-152, and VGG-16. Results show that 69.9% of computations achieve ≥80% efficiency, 47.1% surpass 90%, and 19.2% exceed 95% efficiency, demonstrating superior performance over prior FPGA implementations. Compared to existing designs, our approach achieves a 64.0% increase in efficiency and a 36.5% boost in throughput, while maintaining flexibility across network architectures. These findings highlight the potential of automated optimization techniques in FPGA-based CNN acceleration.
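As a rough illustration of how the automated parameter optimization described above might look, the sketch below runs a search over unroll/parallelism factors for the convolution loops against a simple analytical resource and efficiency model. Everything in it is an assumption made for illustration: the layer shape, the DSP and buffer budget, the resource and cycle-count formulas, the fact that only three of the four loop dimensions are unrolled (the kernel window stays serial), and the use of plain random search in place of the paper's reinforcement-learning-guided, Python-based exploration. It is not the authors' tool, only a minimal stand-in for the idea.

```python
# Hypothetical sketch of an automated design-space search over convolution
# loop unroll factors. The layer shape, FPGA budget, resource model, and
# scoring heuristic are illustrative assumptions, not the paper's method.

import random

# Example layer shape (ResNet-style 3x3 convolution); values are illustrative.
LAYER = dict(H=14, W=14, C_in=256, C_out=256, K=3)

# FPGA budget assumed for the sketch (DSP slices, on-chip buffer in KB).
BUDGET = dict(dsp=2520, buffer_kb=4608)


def resources(pf):
    """Rough resource estimate for a set of unroll factors."""
    dsp = pf["pix"] * pf["out_ch"] * pf["in_ch"]  # one MAC per unrolled term
    buf_kb = (pf["pix"] * pf["in_ch"] * LAYER["K"] ** 2        # input tile
              + pf["out_ch"] * pf["in_ch"] * LAYER["K"] ** 2   # weight tile
              + pf["pix"] * pf["out_ch"]) * 2 / 1024           # output tile, 16-bit words
    return dsp, buf_kb


def efficiency(pf):
    """Fraction of cycles the unrolled MAC array is kept busy (idealized)."""
    def ceil_div(a, b):
        return -(-a // b)

    total_macs = (LAYER["H"] * LAYER["W"] * LAYER["C_in"]
                  * LAYER["C_out"] * LAYER["K"] ** 2)
    cycles = (ceil_div(LAYER["H"] * LAYER["W"], pf["pix"])
              * ceil_div(LAYER["C_out"], pf["out_ch"])
              * ceil_div(LAYER["C_in"], pf["in_ch"])
              * LAYER["K"] ** 2)
    peak_macs_per_cycle = pf["pix"] * pf["out_ch"] * pf["in_ch"]
    return total_macs / (cycles * peak_macs_per_cycle)


def search(candidates_per_dim, trials=2000):
    """Random search over unroll factors; a stand-in for the RL-guided search."""
    best = None
    for _ in range(trials):
        pf = {k: random.choice(v) for k, v in candidates_per_dim.items()}
        dsp, buf = resources(pf)
        if dsp > BUDGET["dsp"] or buf > BUDGET["buffer_kb"]:
            continue  # discard configurations that exceed the assumed budget
        score = efficiency(pf)
        if best is None or score > best[0]:
            best = (score, pf)
    return best


if __name__ == "__main__":
    space = {"pix": [1, 2, 4, 7, 14], "out_ch": [4, 8, 16, 32], "in_ch": [4, 8, 16, 32]}
    result = search(space)
    if result:
        score, cfg = result
        print(f"best config {cfg} with modeled efficiency {score:.2%}")
```

In a real flow the analytical score would presumably be replaced by feedback from synthesis reports or on-board measurement, which is where a learned search policy pays off over blind random sampling.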
About the journal:
Published since 1969, the Microelectronics Journal is an international forum for the dissemination of research and applications of microelectronic systems, circuits, and emerging technologies. Papers published in the Microelectronics Journal have undergone peer review to ensure originality, relevance, and timeliness. The journal thus provides a worldwide, regular, and comprehensive update on microelectronic circuits and systems.
The Microelectronics Journal invites papers describing significant research and applications in all of the areas listed below. Comprehensive review/survey papers covering recent developments will also be considered. The Microelectronics Journal covers circuits and systems. This topic includes but is not limited to: Analog, digital, mixed, and RF circuits and related design methodologies; Logic, architectural, and system-level synthesis; Testing, design for testability, built-in self-test; Area, power, and thermal analysis and design; Mixed-domain simulation and design; Embedded systems; Non-von Neumann computing and related technologies and circuits; Design and test of high-complexity system integration; SoC, NoC, SIP, and NIP design and test; 3-D integration design and analysis; Emerging device technologies and circuits, such as FinFETs, SETs, spintronics, SFQ, MTJ, etc.
Application aspects are also welcome, including signal and image processing (with circuits for cryptography), sensors and actuators (including sensor networks), reliability and quality issues, and economic models.