HEAT: Efficient Vision Transformer Accelerator With Hybrid-Precision Quantization
Pan Zhao;Donghui Xue;Licheng Wu;Liang Chang;Haining Tan;Yinhe Han;Jun Zhou
IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 72, no. 5, pp. 758-762, published 2025-03-04
DOI: 10.1109/TCSII.2025.3547340 (https://ieeexplore.ieee.org/document/10909325/)
Citations: 0
Abstract
Quantization is an important technique for accelerating transformer-based neural networks. Prior works mainly consider quantization at the algorithm level, so their hardware implementations are inefficient. In this brief, we propose an efficient vision transformer accelerator with retraining-free and fine-tuning-free hybrid-precision quantization. At the algorithm level, the features and weights are divided into two parts, normal values and outlier values, which are quantized with different bit widths and scaling factors. We use a matrix transformation and a group-wise quantization policy to improve hardware utilization. At the hardware level, we propose a two-stage FIFO group structure and a hierarchical interleaving data flow to further improve the utilization of the PE array. As a result, the input and weight matrices are quantized to 5.71 bits on average with 0.526% accuracy loss on Swin-T. The accelerator achieves a frame rate of 118.9 FPS and an energy efficiency of 43.58 GOPS/W on the ZCU102 FPGA board, outperforming state-of-the-art works.
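To give a concrete picture of the outlier-aware idea described in the abstract, the minimal sketch below splits each group of values into normal and outlier parts and quantizes them with different bit widths and scaling factors. It is an illustrative approximation only: the magnitude-based threshold rule, the 4-bit/8-bit choice, the group size, and the `hybrid_quantize` helper are assumptions made for this sketch, not the paper's actual configuration, and it does not include the paper's matrix-transformation step or hardware data flow.

```python
import numpy as np

def hybrid_quantize(x: np.ndarray, group_size: int = 64,
                    normal_bits: int = 4, outlier_bits: int = 8,
                    outlier_ratio: float = 0.01) -> np.ndarray:
    """Group-wise quantize-dequantize, giving outlier values more bits."""
    groups = x.reshape(-1, group_size)           # length must divide evenly
    out = np.empty_like(groups)
    k = max(1, int(outlier_ratio * group_size))  # outliers kept per group
    for g, group in enumerate(groups):
        # Treat the k largest-magnitude values in each group as outliers.
        thresh = np.sort(np.abs(group))[-k]
        outlier_mask = np.abs(group) >= thresh
        # Quantize each part symmetrically with its own bit width and scale.
        for mask, bits in ((~outlier_mask, normal_bits),
                           (outlier_mask, outlier_bits)):
            if not mask.any():
                continue
            qmax = 2 ** (bits - 1) - 1
            scale = max(np.abs(group[mask]).max() / qmax, 1e-8)
            q = np.clip(np.round(group[mask] / scale), -qmax - 1, qmax)
            out[g][mask] = q * scale             # dequantize to check error
    return out.reshape(x.shape)

# Example: reconstruction error on heavy-tailed random "activations",
# where a few large outliers benefit from the wider bit width.
vals = np.random.standard_cauchy(4096).astype(np.float32)
print("mean abs error:", np.mean(np.abs(hybrid_quantize(vals) - vals)))
```

The sketch only illustrates the value splitting and mixed-bit-width quantization; in the brief, the matrix transformation and group-wise policy are what keep the quantized layout friendly to the PE array.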
Journal Introduction:
TCAS II publishes brief papers on the theory, analysis, design, and practical implementation of circuits, and on the application of circuit techniques to systems and to signal processing. Coverage spans the whole spectrum from basic scientific theory to industrial applications. The fields of interest include:
Circuits: Analog, Digital and Mixed Signal Circuits and Systems
Nonlinear Circuits and Systems, Integrated Sensors, MEMS and Systems on Chip, Nanoscale Circuits and Systems, Optoelectronic Circuits and Systems, Power Electronics and Systems
Software for Analog-and-Logic Circuits and Systems
Control aspects of Circuits and Systems.