Eliminate Data Divergence in SpMV via Processor and Memory Co-Computing Framework
Zhang Dunbo; Shen Li; Lu Kai
IEEE Transactions on Computers, vol. 74, no. 6, pp. 2017-2030. Published 2025-02-28. DOI: 10.1109/TC.2025.3547162
https://ieeexplore.ieee.org/document/10908574/
Sparse matrix-vector multiplication (SpMV) is a performance-critical kernel in various application domains, including high-performance computing, artificial intelligence, and big data. However, the performance of SpMV on SIMD devices is greatly affected by data divergence. To address this issue, we propose an In-SRAM Computing-based Processor-Memory Co-Compute SpMV optimization framework that divides the SpMV kernel into two stages: a compute-intensive stage and a control-intensive stage. To optimize the first stage, we leverage the parallel random-access feature of multi-bank SRAM to eliminate the overhead caused by memory divergence and use an Aggregate Table (AT) to reduce bank conflicts. To optimize the second stage, we convert control divergence into memory divergence and utilize the Accumulate ScratchPad Memory (AccSPM) to execute reduction operations while eliminating the overhead caused by memory divergence. Experimental results demonstrate that our solution achieves a significant throughput increase over highly optimized vector SpMV kernels under the CSR, CSR5, and CVR compression formats, with speedups of up to 4.74x, 5.58x, and 4.83x (3.11x, 3.04x, and 3.07x on average), respectively.
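For context, the sketch below shows the standard textbook scalar CSR SpMV kernel, not the paper's co-compute framework. It makes the source of data divergence visible: row lengths vary, so a vectorized version of the inner loop suffers control divergence (uneven trip counts per lane), and the indexed accesses to `x` are irregular gathers (memory divergence). All names here are illustrative.

```python
def spmv_csr(vals, cols, row_ptr, x):
    """Compute y = A @ x for A in CSR form.

    vals/cols hold the nonzeros and their column indices;
    row_ptr[i]:row_ptr[i+1] spans the nonzeros of row i.
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        # Inner-loop trip count varies per row -> control divergence on SIMD.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # x[cols[k]] is an irregular gather -> memory divergence on SIMD.
            acc += vals[k] * x[cols[k]]
        y[i] = acc
    return y


# Example: A = [[1, 0, 2], [0, 3, 0], [4, 5, 6]], x = [1, 1, 1]
y = spmv_csr(
    vals=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    cols=[0, 2, 1, 0, 1, 2],
    row_ptr=[0, 2, 3, 6],
    x=[1.0, 1.0, 1.0],
)
# y == [3.0, 3.0, 15.0]
```

Formats such as CSR5 and CVR, mentioned in the abstract, restructure this layout so that SIMD lanes process equal-sized work units, which is why they serve as the optimized baselines in the paper's evaluation.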
Journal Description:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.