{"title":"Performance portability of sparse matrix–vector multiplication implemented using OpenMP, OpenACC and SYCL","authors":"Kinga Stec, Przemysław Stpiczyński","doi":"10.1016/j.future.2025.107825","DOIUrl":null,"url":null,"abstract":"<div><div>The aim of this paper is to study the performance portability of OpenMP, OpenACC and SYCL implementations of sparse matrix–vector product (SpMV) and its extended version in which the dot product of the input vector and the result is also calculated, for CSR and BSR storage formats, on Intel and AMD CPUs and NVIDIA GPU platforms. We compare it with the performance portability of much more sophisticated implementations provided by the vendors in their Intel oneAPI MKL and NVIDIA cuSPARSE libraries. Using the reformulated performance portability metric <figure><img></figure> we show how it changes for various sparse matrices and which portable implementation and format achieve better performance portability. Numerical experiments show that the considered portable implementations for the CSR format usually achieve better performance than for the BSR format. On GPU, CSR OpenACC implementations for SpMV and SpMV-DOT tend to be the best. 
On CPU, CSR OpenMP implementation usually gives the best results for SpMV-DOT, while CSR OpenMP and BSR MKL achieve the best results for a similar number of matrices.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"170 ","pages":"Article 107825"},"PeriodicalIF":6.2000,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25001207","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
The aim of this paper is to study the performance portability of OpenMP, OpenACC and SYCL implementations of the sparse matrix–vector product (SpMV) and its extended version, in which the dot product of the input vector and the result is also calculated, for the CSR and BSR storage formats, on Intel and AMD CPUs and NVIDIA GPU platforms. We compare it with the performance portability of the much more sophisticated implementations provided by the vendors in their Intel oneAPI MKL and NVIDIA cuSPARSE libraries. Using a reformulated performance portability metric, we show how it changes for various sparse matrices and which portable implementation and format achieve better performance portability. Numerical experiments show that the considered portable implementations for the CSR format usually achieve better performance than those for the BSR format. On GPU, the CSR OpenACC implementations for SpMV and SpMV-DOT tend to be the best. On CPU, the CSR OpenMP implementation usually gives the best results for SpMV-DOT, while CSR OpenMP and BSR MKL achieve the best results for a similar number of matrices.
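The abstract refers to a reformulated performance portability metric without defining it. The paper's exact reformulation is not reproduced here; for orientation, the widely used baseline definition by Pennycook et al., on which such reformulations typically build, is the harmonic mean of an application's performance efficiencies across a platform set:

\[
\mathrm{PP}(a, p, H) =
\begin{cases}
\dfrac{|H|}{\displaystyle\sum_{i \in H} \dfrac{1}{e_i(a, p)}} & \text{if application } a \text{ runs on every platform } i \in H, \\[2ex]
0 & \text{otherwise,}
\end{cases}
\]

where \(H\) is the set of platforms, \(p\) the problem, and \(e_i(a, p) \in (0, 1]\) the performance efficiency (e.g., achieved fraction of architectural or application-best performance) of \(a\) on platform \(i\).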
Journal description:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.