A Novel Multi-GPU Parallel Optimization Model for The Sparse Matrix-Vector Multiplication

Jiaquan Gao, Yuanshen Zhou, Kesong Wu
Journal: Parallel Processing Letters
DOI: 10.1142/S0129626416400016 (https://doi.org/10.1142/S0129626416400016)
Published: 2016-12-21
Citation count: 0

Abstract

Accelerating the sparse matrix-vector multiplication (SpMV) on the graphics processing units (GPUs) has attracted considerable attention recently. We observe that on a specific multiple-GPU platform, the SpMV performance can usually be greatly improved when a matrix is partitioned into several blocks according to a predetermined rule and each block is assigned to a GPU with an appropriate storage format. This motivates us to propose a novel multi-GPU parallel SpMV optimization model. Our model involves two stages. In the first stage, a simple rule is defined to divide any given matrix among multiple GPUs, and then a performance model, which is independent of the problems and dependent on the resources of devices, is proposed to accurately predict the execution time of SpMV kernels. Using these models, we construct in the second stage an optimally multi-GPU parallel SpMV algorithm that is automatically and rapidly generated for the platform for any problem. Given that our model for SpMV is general, indepen...
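The abstract describes a first stage that partitions a matrix into blocks by a simple rule and assigns each block to a GPU with a suitable storage format. The paper's actual rule and performance model are not given here; as a rough illustration of the general idea only, the sketch below splits a CSR matrix into row blocks with roughly balanced nonzero counts (one block per GPU) and picks a per-block format with a toy heuristic. The balance rule, the `choose_format` heuristic, and all names are assumptions for illustration, not the authors' model.

```python
# Sketch: partition a CSR matrix into row blocks with ~equal nonzeros,
# one block per GPU, then pick a storage format per block.
# The format heuristic is a toy assumption, not the paper's rule.

def partition_rows_by_nnz(row_ptr, num_gpus):
    """Return row ranges [(start, end), ...] with roughly equal nonzeros."""
    total_nnz = row_ptr[-1]
    target = total_nnz / num_gpus
    bounds, start = [], 0
    for g in range(1, num_gpus):
        goal = g * target
        end = start
        # advance rows until this GPU holds about its share of nonzeros
        while end < len(row_ptr) - 1 and row_ptr[end + 1] < goal:
            end += 1
        bounds.append((start, end))
        start = end
    bounds.append((start, len(row_ptr) - 1))
    return bounds

def choose_format(row_ptr, start, end):
    """Toy heuristic: near-uniform row lengths -> ELL, otherwise CSR."""
    lengths = [row_ptr[i + 1] - row_ptr[i] for i in range(start, end)]
    if not lengths:
        return "CSR"
    return "ELL" if max(lengths) - min(lengths) <= 2 else "CSR"

# Example: 6-row matrix in CSR form; nonzeros per row are 2, 2, 2, 10, 1, 1.
row_ptr = [0, 2, 4, 6, 16, 17, 18]
blocks = partition_rows_by_nnz(row_ptr, 2)
formats = [choose_format(row_ptr, s, e) for s, e in blocks]
print(blocks, formats)
```

In this example the dense fourth row drags one block toward CSR while the regular short rows make the other block a natural fit for ELL, which is the kind of per-block format decision the abstract alludes to.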