A reconfigurable multi-precision quantization-aware nonlinear activation function hardware module for DNNs
Qi Hong, Zhiming Liu, Qiang Long, Hao Tong, Tianxu Zhang, Xiaowen Zhu, Yunong Zhao, Hua Ru, Yuxing Zha, Ziyuan Zhou, Jiashun Wu, Hongtao Tan, Weiqiang Hong, Yaohua Xu, Xiaohui Guo
Microelectronics Journal, published 2024-07-22
DOI: 10.1016/j.mejo.2024.106346
https://www.sciencedirect.com/science/article/pii/S187923912400050X
Citations: 0
Abstract
In recent years, the increasing variety of nonlinear activation functions (NAFs) in deep neural networks (DNNs) has led to higher computational demands. However, hardware implementations face challenges such as a lack of flexibility, high hardware cost, and limited accuracy. This paper proposes a highly flexible, low-cost hardware solution for implementing activation functions that overcomes these issues. Based on the piecewise linear (PWL) approximation method, our approach supports NAFs with different accuracy configurations through a customized implementation strategy, meeting the requirements of different application scenarios. The symmetry of the activation functions is investigated, and curve translation preprocessing and data quantization are incorporated to significantly reduce hardware storage costs. The modular hardware architecture proposed in this study supports NAFs of multiple accuracies, making it suitable for designing deep learning neural network accelerators in various scenarios; it avoids the need for dedicated hardware circuits for each activation function layer and improves circuit design efficiency. The proposed hardware architecture is validated on the Xilinx XC7Z010 development board. Experimental results show that the average absolute error (AAE) is reduced by about 35.6 % at a clock frequency of 312.5 MHz. Additionally, the maximum accuracy loss of the model is −0.684 % after replacing the activation layers of DNNs under the PyTorch framework.
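To make the abstract's core idea concrete, the following is a minimal software sketch (not the authors' actual hardware design) of a PWL approximation of the sigmoid that exploits its symmetry, sigmoid(−x) = 1 − sigmoid(x), so the lookup table only covers x ≥ 0. The segment count, input range, and function names are illustrative assumptions; a hardware version would additionally quantize the slopes and intercepts to fixed point.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical PWL table: 16 uniform segments over [0, 8], stored only for
# x >= 0 by exploiting the symmetry sigmoid(-x) = 1 - sigmoid(x).
breakpoints = np.linspace(0.0, 8.0, 17)
slopes = np.diff(sigmoid(breakpoints)) / np.diff(breakpoints)
intercepts = sigmoid(breakpoints[:-1]) - slopes * breakpoints[:-1]

def pwl_sigmoid(x):
    """Piecewise-linear sigmoid using the positive-half table plus mirroring."""
    x = np.asarray(x, dtype=float)
    ax = np.minimum(np.abs(x), breakpoints[-1])        # saturate beyond range
    idx = np.clip(np.searchsorted(breakpoints, ax, side="right") - 1,
                  0, len(slopes) - 1)                  # segment index for |x|
    y = slopes[idx] * ax + intercepts[idx]
    return np.where(x < 0.0, 1.0 - y, y)               # mirror negative inputs

# Average absolute error (AAE) of the approximation over a dense grid
xs = np.linspace(-8.0, 8.0, 2001)
aae = float(np.mean(np.abs(pwl_sigmoid(xs) - sigmoid(xs))))
```

Halving the stored table this way is the software analogue of the storage savings the paper attributes to symmetry exploitation; finer segmentation trades table size for a lower AAE.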
Journal introduction:
Published since 1969, the Microelectronics Journal is an international forum for the dissemination of research and applications of microelectronic systems, circuits, and emerging technologies. Papers published in the Microelectronics Journal have undergone peer review to ensure originality, relevance, and timeliness. The journal thus provides a worldwide, regular, and comprehensive update on microelectronic circuits and systems.
The Microelectronics Journal invites papers describing significant research and applications in all of the areas listed below. Comprehensive review/survey papers covering recent developments will also be considered. The Microelectronics Journal covers circuits and systems; coverage includes, but is not limited to: analog, digital, mixed, and RF circuits and related design methodologies; logic, architectural, and system-level synthesis; testing, design for testability, built-in self-test; area, power, and thermal analysis and design; mixed-domain simulation and design; embedded systems; non-von Neumann computing and related technologies and circuits; design and test of high-complexity systems integration; SoC, NoC, SIP, and NIP design and test; 3-D integration design and analysis; and emerging device technologies and circuits, such as FinFETs, SETs, spintronics, SFQ, MTJ, etc.
Application aspects such as signal and image processing including circuits for cryptography, sensors, and actuators including sensor networks, reliability and quality issues, and economic models are also welcome.