{"title":"Efficient and High-Performance Sparse Matrix-Vector Multiplication on a Many-Core Array","authors":"Peiyao Shi, Aaron Stillmaker, B. Baas","doi":"10.1109/MCSoC57363.2022.00038","DOIUrl":null,"url":null,"abstract":"Sparse matrix-vector multiplication (SpMV) is a critical operation in scientific computing, engineering, and other applications. Eight functionally-equivalent SpMV implementations are created for a fine-grained many-core platform with independent shared memory modules. These implementations are compared with a general-purpose processor (Intel Core-i7 3720QM) and a graphics processing unit (GPU, NVIDIA Quadro 620) and results are scaled to 32 nm CMOS. The performance (throughput per chip area) for all three platforms is compared when operating on a set of seven unstructured sparse matrices of varying dimensions up to 3.6 billion elements. The many-core implementations show a $54\\times$ greater performance than the general-purpose processor, and $40\\times$ greater performance than the GPU.","PeriodicalId":150801,"journal":{"name":"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCSoC57363.2022.00038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Sparse matrix-vector multiplication (SpMV) is a critical operation in scientific computing, engineering, and other applications. Eight functionally equivalent SpMV implementations are created for a fine-grained many-core platform with independent shared memory modules. These implementations are compared with a general-purpose processor (Intel Core i7-3720QM) and a graphics processing unit (GPU, NVIDIA Quadro 620), with results scaled to 32 nm CMOS. Performance (throughput per chip area) is compared across all three platforms operating on a set of seven unstructured sparse matrices of varying dimensions with up to 3.6 billion elements. The many-core implementations show $54\times$ greater performance than the general-purpose processor and $40\times$ greater performance than the GPU.
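The abstract does not specify the storage format or kernel code used by the eight implementations. As a minimal sketch of the SpMV kernel being benchmarked, the following C code computes y = A*x with A stored in compressed sparse row (CSR) format; the CSR layout and all names here are illustrative assumptions, not the paper's actual implementation.

```c
/* Minimal SpMV sketch (y = A*x), matrix stored in CSR format.
 * Assumption for illustration only; the paper's many-core
 * implementations may use a different storage scheme. */
#include <stdio.h>

/* CSR layout: row_ptr has n_rows+1 entries; col_idx and vals
 * each hold one entry per nonzero element. */
void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
              const double *vals, const double *x, double *y)
{
    for (int i = 0; i < n_rows; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += vals[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example matrix with 5 nonzeros:
     *   [ 10  0  2 ]
     *   [  0  3  0 ]
     *   [  1  0  4 ]                       */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_idx[] = {0, 2, 1, 0, 2};
    double vals[]    = {10.0, 2.0, 3.0, 1.0, 4.0};
    double x[]       = {1.0, 2.0, 3.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    for (int i = 0; i < 3; i++)
        printf("y[%d] = %g\n", i, y[i]); /* expected: 16, 6, 13 */
    return 0;
}
```

The irregular, indirect access to x through col_idx is what makes SpMV memory-bound on unstructured matrices, which is why the paper evaluates throughput per chip area across architectures rather than raw arithmetic rate alone.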