{"title":"Using fixed memory blocks in GPUs to accelerate SpMV multiplication in probabilistic model checkers","authors":"Muhammad Hannan Khan, Shahid Khan, Osman Hasan","doi":"10.1016/j.jlamp.2025.101073","DOIUrl":null,"url":null,"abstract":"<div><h3>Context</h3><div>Probabilistic model checkers rely heavily on sparse matrix-vector multiplication (SpMV) to analyze a given probabilistic model. SpMV is a compute- and memory-intensive task. Therefore, it adversely affects the scalability of probabilistic model checkers. Graphical processing units (GPUs) have been utilized to improve the speed of SpMV. The GPU-based SpMV compute time consists of two independent factors: (Factor 1) host-to-GPU memory transfer and (Factor 2) the actual GPU-based SpMV multiplication. While many researchers have focused on the importance of Factor 1, none have explored ways to minimize its impact on overall SpMV computation time.</div></div><div><h3>Objective</h3><div>This paper proposes an approach to reduce the memory transfer-related latency by hiding the data transfer from the host to the GPU in the state-space exploration step of probabilistic model checking.</div></div><div><h3>Methods</h3><div>This is achieved in two steps: 1) reserve the complete coalesced memory in the GPU, and 2) move chunks of the sparse matrix from the host to the reserved memory during state-space exploration.</div></div><div><h3>Results</h3><div>We report on an open source prototypical implementation of our approach on a CUDA-based cuSPARSE API in <span>Storm</span>, a prominent probabilistic model checker.</div></div><div><h3>Conclusion</h3><div>We empirically demonstrate that our approach reduces memory transfer latency by at least one order of magnitude. Additionally, for most of the benchmarks, our approach achieves computation times comparable to <span>GPU-Prism</span>, a prominent probabilistic model checker.</div></div>","PeriodicalId":48797,"journal":{"name":"Journal of Logical and Algebraic Methods in Programming","volume":"147 ","pages":"Article 101073"},"PeriodicalIF":0.7000,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Logical and Algebraic Methods in Programming","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352220825000392","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Abstract
Context
Probabilistic model checkers rely heavily on sparse matrix-vector multiplication (SpMV) to analyze a given probabilistic model. SpMV is a compute- and memory-intensive task and therefore adversely affects the scalability of probabilistic model checkers. Graphics processing units (GPUs) have been used to accelerate SpMV. The GPU-based SpMV compute time consists of two independent factors: (Factor 1) the host-to-GPU memory transfer and (Factor 2) the actual GPU-based SpMV multiplication. While many researchers have acknowledged the importance of Factor 1, none have explored ways to minimize its impact on the overall SpMV computation time.
Objective
This paper proposes an approach to reduce memory-transfer latency by hiding the host-to-GPU data transfer within the state-space exploration step of probabilistic model checking.
Methods
This is achieved in two steps: 1) reserve a single, complete coalesced memory block in the GPU up front, and 2) move chunks of the sparse matrix from the host into the reserved memory during state-space exploration, as illustrated in the sketch below.
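To make the two steps concrete, the following CUDA sketch shows one way they could be realized: a contiguous device allocation is reserved before exploration starts, and chunks of CSR values produced during exploration are staged into it with asynchronous copies on a dedicated stream. This is a minimal illustration under stated assumptions, not the paper's actual implementation; the names Chunk, maxNnz, and the surrounding structure are hypothetical.

// Sketch only: step 1 reserves the full device block, step 2 streams chunks into it
// while the host continues state-space exploration. Only the CSR value array is shown;
// a real CSR matrix would also stage row offsets and column indices the same way.
#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

struct Chunk {                  // one slice of CSR non-zeros produced by exploration (hypothetical)
    const double* values;       // host-side values for this slice
    size_t        count;        // number of non-zeros in this slice
};

int main() {
    const size_t maxNnz = 1u << 24;   // assumed upper bound on non-zeros, known before exploration

    // Step 1: reserve the complete coalesced block on the GPU up front.
    double* dValues = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&dValues), maxNnz * sizeof(double));

    cudaStream_t copyStream;
    cudaStreamCreate(&copyStream);

    // Filled incrementally by the exploration loop (omitted here). For the copies to
    // truly overlap with host work, the chunk buffers should be page-locked (cudaHostAlloc).
    std::vector<Chunk> chunks;

    size_t offset = 0;
    for (const Chunk& c : chunks) {
        // Step 2: stage each chunk into the reserved block; the asynchronous copy
        // overlaps with the host-side exploration that produces the next chunk.
        cudaMemcpyAsync(dValues + offset, c.values,
                        c.count * sizeof(double),
                        cudaMemcpyHostToDevice, copyStream);
        offset += c.count;
    }
    cudaStreamSynchronize(copyStream);   // matrix fully resident before SpMV begins

    cudaStreamDestroy(copyStream);
    cudaFree(dValues);
    return 0;
}

Because the copies are issued chunk by chunk as exploration progresses, the transfer cost is paid during a phase that has to run anyway, which is the latency-hiding effect the method targets.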
Results
We report on an open-source prototypical implementation of our approach, built on the CUDA-based cuSPARSE API, in Storm, a prominent probabilistic model checker.
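For a rough picture of the GPU side, the sketch below performs a single SpMV y = A·x through the cuSPARSE generic API on CSR data assumed to be already resident in the reserved device block. It is a minimal sketch under that assumption, not the actual Storm integration; the function name spmv_once and its pointer parameters are hypothetical.

// Sketch only: one SpMV via the cuSPARSE generic API, on device-resident CSR data.
#include <cusparse.h>
#include <cuda_runtime.h>

void spmv_once(int rows, int cols, int nnz,
               int* d_rowPtr, int* d_colInd, double* d_val,
               double* d_x, double* d_y) {
    cusparseHandle_t handle;
    cusparseCreate(&handle);

    // Describe the CSR matrix already staged in the reserved block.
    cusparseSpMatDescr_t matA;
    cusparseCreateCsr(&matA, rows, cols, nnz, d_rowPtr, d_colInd, d_val,
                      CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I,
                      CUSPARSE_INDEX_BASE_ZERO, CUDA_R_64F);

    cusparseDnVecDescr_t vecX, vecY;
    cusparseCreateDnVec(&vecX, cols, d_x, CUDA_R_64F);
    cusparseCreateDnVec(&vecY, rows, d_y, CUDA_R_64F);

    const double alpha = 1.0, beta = 0.0;
    size_t bufSize = 0;
    void* dBuf = nullptr;
    cusparseSpMV_bufferSize(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                            &alpha, matA, vecX, &beta, vecY,
                            CUDA_R_64F, CUSPARSE_SPMV_ALG_DEFAULT, &bufSize);
    cudaMalloc(&dBuf, bufSize);

    // The multiplication itself (Factor 2); no host-to-GPU transfer happens here
    // because the matrix is already resident on the device.
    cusparseSpMV(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                 &alpha, matA, vecX, &beta, vecY,
                 CUDA_R_64F, CUSPARSE_SPMV_ALG_DEFAULT, dBuf);

    cudaFree(dBuf);
    cusparseDestroyDnVec(vecX);
    cusparseDestroyDnVec(vecY);
    cusparseDestroySpMat(matA);
    cusparseDestroy(handle);
}

In probabilistic model checking, such a call typically sits inside an iterative numerical scheme such as value iteration, which is why keeping the matrix resident on the GPU across iterations matters.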
Conclusion
We empirically demonstrate that our approach reduces memory transfer latency by at least one order of magnitude. Additionally, for most benchmarks, our approach achieves computation times comparable to those of GPU-Prism, another prominent probabilistic model checker.
Journal Introduction
The Journal of Logical and Algebraic Methods in Programming is an international journal whose aim is to publish high-quality original research papers, survey and review articles, tutorial expositions, and historical studies in the areas of logical and algebraic methods and techniques for guaranteeing the correctness and performability of programs and, more generally, of computing systems. All aspects will be covered, especially theory and foundations, implementation issues, and applications involving novel ideas.