Title: Bit-Sparsity Aware Acceleration With Compact CSD Code on Generic Matrix Multiplication
Authors: Zixuan Zhu; Xiaolong Zhou; Chundong Wang; Li Tian; Zunkai Huang; Yongxin Zhu
Journal: IEEE Transactions on Computers, vol. 74, no. 2, pp. 414-426
DOI: 10.1109/TC.2024.3483632
Publication date: 2024-10-21
URL: https://ieeexplore.ieee.org/document/10723799/
Citations: 0
Abstract
The ever-increasing demand for matrix multiplication in artificial intelligence (AI) and generic computing emphasizes the necessity of efficient computing power accommodating both floating-point (FP) and quantized integer (QINT). While state-of-the-art bit-sparsity-aware acceleration techniques have demonstrated impressive performance and efficiency in neural networks through software-driven methods such as pruning and quantization, these approaches are not always feasible in typical generic computing scenarios. In this paper, we propose Bit-Cigma, a hardware-centric architecture that leverages bit-sparsity to accelerate generic matrix multiplication. Bit-Cigma features (1) CCSD encoding, an optimized on-chip sparsification technique based on canonical signed digit (CSD) representation; (2) segmented dot product, a multi-stage exponent matching technique for long FP vectors; and (3) the versatility to efficiently process both FP and QINT data types. CCSD encoding halves the cost of CSD encoding while achieving optimal bit-sparsity, and segmented dot product improves both accuracy and throughput. Bit-Cigma cores are implemented using 65 nm technology at 1 GHz, demonstrating substantial gains in performance and efficiency for both FP and QINT configurations. Compared to the state-of-the-art Bitlet, Bit-Cigma achieves 3.2× the performance, 6.1× the area efficiency, and 15.3× the energy efficiency when processing FP32 data while ensuring zero computing error.
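The CCSD encoding described above builds on canonical signed digit (CSD) representation, which rewrites a binary number with digits {-1, 0, +1} so that no two adjacent digits are nonzero; this minimizes the number of nonzero digits and hence the partial products a multiplier must accumulate — the bit-sparsity that Bit-Cigma exploits. As a point of reference, here is a minimal sketch of plain CSD encoding (not the paper's optimized CCSD variant, whose details are in the full text):

```python
def csd_encode(n: int) -> list[int]:
    """Encode a non-negative integer as CSD digits, least significant first.

    Each digit is -1, 0, or +1, and no two adjacent digits are nonzero,
    so the count of nonzero digits is minimal (non-adjacent form).
    """
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n ≡ 1 (mod 4), -1 if n ≡ 3 (mod 4)
            n -= d            # clear the low digit, possibly carrying upward
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

# Example: 7 = 0b111 (three nonzero bits) becomes 8 - 1 (two nonzero digits).
print(csd_encode(7))  # [-1, 0, 0, 1]
```

A hardware multiplier operating on the CSD form generates one (possibly negated) partial product per nonzero digit, so reducing nonzero digits directly reduces the add/shift work per multiplication.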
About the journal:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.