Dezheng Zhang , Rui Cen , Han Pu , Rui Wan , Dong Wang
An FPGA-based binary neural network accelerator with enhanced hardware efficiency and data reuse
Microelectronics Journal, vol. 156, Article 106556, February 2025. DOI: 10.1016/j.mejo.2025.106556
Citations: 0
Abstract
Binary neural network (BNN) algorithms have gained significant attention due to their low computational complexity and improved accuracy. Field-programmable gate array (FPGA)-based accelerators for BNNs typically transform binary operations into XNOR and popcount operations, efficiently utilizing logic resources. However, fixed-point operations in the first and final layers result in high hardware costs. Moreover, frequent off-chip memory access, particularly for shortcut operations, can severely limit throughput.
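The XNOR-and-popcount transformation mentioned above rests on a standard identity: for ±1-valued vectors packed as bit strings (bit 1 encoding +1, bit 0 encoding −1), the dot product equals twice the number of matching bit positions minus the vector length. A minimal sketch of this identity follows; the packing convention and function name are illustrative, not taken from the paper:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two length-n ±1 vectors packed as integers,
    where bit 1 encodes +1 and bit 0 encodes -1."""
    # Positions where the bits agree contribute +1, disagreements -1,
    # so dot = matches - (n - matches) = 2 * matches - n.
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # mask to n bits
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n

# Example: a = [+1,-1,+1,-1], w = [+1,+1,-1,-1]
# products are +1,-1,-1,+1, so the dot product is 0.
print(binary_dot(0b1010, 0b1100, 4))  # → 0
```

On an FPGA this maps to a bitwise XNOR followed by a popcount tree, replacing n multiply-accumulates with logic operations.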
This paper introduces a hardware-efficient FPGA-based BNN accelerator with minimized off-chip memory access. The first layer’s inputs are binarized using an enhanced thermometer encoding scheme, enabling all convolution layers to be executed on a unified binary computing unit. A multi-mode computing unit is designed to perform both ReAct parametric rectified linear unit (RPReLU) and linear operations using shared hardware multipliers, which maximizes hardware reuse. To address bandwidth limitations, we implement a dynamic data buffering architecture, featuring high-bandwidth and high-volume buffers for caching intra-layer and cross-layer data, respectively. The accelerator supports two data buffering schemes, and the one minimizing off-chip memory access is selected for each convolution layer. The accelerator is evaluated on an Alinx AXU3EGB development board and on a custom development board with a Xilinx XC7V690T device. Experiments on ImageNet demonstrate 69.34% accuracy, with frames per second (FPS) of 129.1 and 405.6 on the ZU3EG and XC7V690T devices, respectively. Compared to the state-of-the-art design, our accelerator achieves 2.7× the FPS on the same FPGA device.
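Two of the building blocks above can be illustrated compactly. A plain thermometer encoding turns a fixed-point input into a unary bit vector (bit i is 1 iff the value exceeds threshold i), so the first-layer convolution becomes a binary operation; the paper's "enhanced" variant is not detailed in the abstract, so this is the baseline scheme only. RPReLU follows the ReActNet formulation, with learnable shift (γ, ζ) and negative-slope (β) parameters; the threshold values and parameter names here are illustrative:

```python
def thermometer_encode(x: int, thresholds: list[int]) -> list[int]:
    """Baseline thermometer (unary) code: bit i is 1 iff x > thresholds[i].
    The resulting bits feed the same XNOR/popcount datapath as later layers."""
    return [1 if x > t else 0 for t in thresholds]

def rprelu(x: float, gamma: float, beta: float, zeta: float) -> float:
    """ReAct PReLU: shift the input by gamma, apply a PReLU with
    negative slope beta, then shift the output by zeta."""
    shifted = x - gamma
    return (shifted if shifted > 0 else beta * shifted) + zeta

# An 8-bit pixel against 7 evenly spaced thresholds:
print(thermometer_encode(130, [32, 64, 96, 128, 160, 192, 224]))
# → [1, 1, 1, 1, 0, 0, 0]
```

Because RPReLU reduces to a compare, a multiply, and two adds, its multiplier can plausibly be time-shared with other linear (scale-and-shift) operations, which is the kind of reuse the multi-mode computing unit exploits.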
Journal introduction:
Published since 1969, the Microelectronics Journal is an international forum for the dissemination of research and applications of microelectronic systems, circuits, and emerging technologies. Papers published in the Microelectronics Journal have undergone peer review to ensure originality, relevance, and timeliness. The journal thus provides a worldwide, regular, and comprehensive update on microelectronic circuits and systems.
The Microelectronics Journal invites papers describing significant research and applications in all of the areas listed below. Comprehensive review/survey papers covering recent developments will also be considered. The Microelectronics Journal covers circuits and systems. This topic includes but is not limited to: Analog, digital, mixed, and RF circuits and related design methodologies; Logic, architectural, and system level synthesis; Testing, design for testability, built-in self-test; Area, power, and thermal analysis and design; Mixed-domain simulation and design; Embedded systems; Non-von Neumann computing and related technologies and circuits; Design and test of high complexity systems integration; SoC, NoC, SIP, and NIP design and test; 3-D integration design and analysis; Emerging device technologies and circuits, such as FinFETs, SETs, spintronics, SFQ, MTJ, etc.
Application aspects such as signal and image processing including circuits for cryptography, sensors, and actuators including sensor networks, reliability and quality issues, and economic models are also welcome.