Accelerating tensor multiplication by exploring hybrid product with hardware and software co-design

Zhiyuan Zhang, Zhihua Fan, Wenming Li, Yuhang Qiu, Zhen Wang, Xiaochun Ye, Dongrui Fan, Xuejun An

Journal of Systems Architecture, Volume 159, Article 103333, February 2025
DOI: 10.1016/j.sysarc.2025.103333
Citations: 0
Abstract
Tensor multiplication holds a pivotal position in numerous applications. Existing accelerators predominantly rely on inner or outer products as their computational strategy, yet these methodologies encounter obstacles such as excessive storage overhead, underutilized parallelism, and merging costs. To tackle these challenges, we propose an acceleration technique that integrates a hybrid product approach with tailored hardware. Our design accommodates tensor multiplications of various scales and offers excellent scalability. First, we employ a hybrid product approach for tensor multiplication, strategically leveraging several methods – including inner, outer, and Hadamard products – to optimize different stages of submatrix computation. Second, we devise a dedicated architecture that aligns seamlessly with the hybrid product, leveraging a dataflow paradigm to map tensor multiplication efficiently onto the hardware. Third, we design a sliding-window partial-reuse FIFO (SWFIFO), alongside a data reorder and scheduling unit, to accelerate data retrieval. For general matrix multiplication (GEMM), our design demonstrates an average speedup of 17.62× over Nvidia's V100 GPU at 9.47% of its energy consumption. Furthermore, it surpasses Google's TPU (size of 256 × 256) by an average of 3.76×, TPUv2 (size of 128 × 128) by 3.19×, and Eyeriss by 3.8×. When evaluated on eight neural network models, our design yields a performance boost of 2.89× over TPU and 2.19× over Eyeriss.
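The inner-product versus outer-product distinction the abstract contrasts amounts to a loop-order choice in GEMM. A minimal pure-Python sketch (our own illustration of the two classic dataflows, not the paper's hardware design or its hybrid scheme) makes the trade-off concrete:

```python
# Illustrative sketch of the two GEMM dataflows the abstract contrasts.
# Computes C = A x B for A (M x K) and B (K x N), plain nested lists.

def gemm_inner(A, B):
    """Inner-product dataflow: each C[i][j] is one complete dot product,
    so every output element is finalized once (no merging of partial
    results), but rows of A and columns of B are re-fetched many times."""
    M, K, N = len(A), len(B), len(B[0])
    C = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            for k in range(K):
                C[i][j] += A[i][k] * B[k][j]
    return C

def gemm_outer(A, B):
    """Outer-product dataflow: step k combines column k of A with row k
    of B as a rank-1 update. Each input element is read only once, but
    K partial-result matrices must be accumulated, which is the storage
    and merging overhead the abstract mentions."""
    M, K, N = len(A), len(B), len(B[0])
    C = [[0.0] * N for _ in range(M)]
    for k in range(K):
        for i in range(M):
            for j in range(N):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Both orderings produce identical results; they differ only in which operands stay resident and which partial sums must be buffered, which is why a hybrid of the two can pick the cheaper dataflow per stage.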
About the journal:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not itself a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.