Towards Efficient Algorithms for Compressed Sparse-Sparse Matrix Product

S. Ezouaoui, O. Hamdi-Larbi, Z. Mahjoub
{"title":"压缩稀疏-稀疏矩阵积的高效算法研究","authors":"S. Ezouaoui, O. Hamdi-Larbi, Z. Mahjoub","doi":"10.1109/HPCS.2017.101","DOIUrl":null,"url":null,"abstract":"We study the sparse matrix product problem where the input matrices are sparse. Starting with an original DO- loop nest structured algorithm, different versions involving body kernels such as GAXPY, AXPY and DOT are generated by the loop interchange technique. We particularly focus on the GAXPY- Row body kernel where the matrices are acceded row-wise. Various versions corresponding to the most used sparse matrix compression formats are designed. We then derive other versions by applying improving techniques such as loop invariant motion and loop unrolling. A theoretical multi-fold performance study permits to establish accurate comparisons between the different versions. Our contribution is validated through experiments achieved on two input sets i.e. a set of randomly generated matrices and a set of benchmark matrices of different sizes and densities. This permitted to notice that the improvement procedure led to an efficient version dramatically reducing the run time up to 98%. Our algorithms were also compared with kernels from NIST Sparse Blas, CSparse and SPARSKIT2 libraries.","PeriodicalId":115758,"journal":{"name":"2017 International Conference on High Performance Computing & Simulation (HPCS)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Towards Efficient Algorithms for Compressed Sparse-Sparse Matrix Product\",\"authors\":\"S. Ezouaoui, O. Hamdi-Larbi, Z. Mahjoub\",\"doi\":\"10.1109/HPCS.2017.101\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We study the sparse matrix product problem where the input matrices are sparse. Starting with an original DO- loop nest structured algorithm, different versions involving body kernels such as GAXPY, AXPY and DOT are generated by the loop interchange technique. We particularly focus on the GAXPY- Row body kernel where the matrices are acceded row-wise. Various versions corresponding to the most used sparse matrix compression formats are designed. We then derive other versions by applying improving techniques such as loop invariant motion and loop unrolling. A theoretical multi-fold performance study permits to establish accurate comparisons between the different versions. Our contribution is validated through experiments achieved on two input sets i.e. a set of randomly generated matrices and a set of benchmark matrices of different sizes and densities. This permitted to notice that the improvement procedure led to an efficient version dramatically reducing the run time up to 98%. 
Our algorithms were also compared with kernels from NIST Sparse Blas, CSparse and SPARSKIT2 libraries.\",\"PeriodicalId\":115758,\"journal\":{\"name\":\"2017 International Conference on High Performance Computing & Simulation (HPCS)\",\"volume\":\"113 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 International Conference on High Performance Computing & Simulation (HPCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPCS.2017.101\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on High Performance Computing & Simulation (HPCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPCS.2017.101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

We study the sparse matrix product problem where both input matrices are sparse. Starting from an original DO-loop nest structured algorithm, different versions involving body kernels such as GAXPY, AXPY and DOT are generated through the loop interchange technique. We focus in particular on the GAXPY-Row body kernel, in which the matrices are accessed row-wise. Various versions corresponding to the most widely used sparse matrix compression formats are designed. We then derive further versions by applying improvement techniques such as loop-invariant code motion and loop unrolling. A theoretical multi-fold performance study allows accurate comparisons to be established between the different versions. Our contribution is validated through experiments on two input sets, i.e. a set of randomly generated matrices and a set of benchmark matrices of different sizes and densities. These experiments show that the improvement procedure leads to an efficient version that dramatically reduces the run time, by up to 98%. Our algorithms were also compared with kernels from the NIST Sparse BLAS, CSparse and SPARSKIT2 libraries.
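
To make the loop-interchange idea concrete, consider the dense reference first: C = A·B computed with a triple DO-loop nest. Reordering the three loops changes the operation carried out by the loop body, which is how the DOT, AXPY and GAXPY kernels named above arise. The ikj ordering shown below is the row-oriented GAXPY form; it is a minimal sketch for illustration, not code from the paper.

    /* Dense reference for C = A*B (n x n, row-major).  With the ikj ordering the
     * innermost loop performs a row GAXPY: C(i,:) += A(i,k) * B(k,:).  Interchanging
     * the loops (ijk, jik, ...) turns the body into the DOT or AXPY kernel instead. */
    static void dense_gaxpy_row(int n, const double *A, const double *B, double *C)
    {
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
    }

Under the same caveat, the sketch below carries the GAXPY-Row scheme over to a compressed sparse-sparse product with A, B and C all stored in CSR (compressed sparse row), one of the compression formats the paper targets. The type and function names (csr_t, csr_spgemm_gaxpy_row) and the dense-accumulator bookkeeping are illustrative assumptions, not the authors' implementation.

    #include <stdlib.h>

    /* CSR storage with 0-based indices. */
    typedef struct {
        int n_rows, n_cols, nnz;
        int *row_ptr;   /* size n_rows + 1 */
        int *col_idx;   /* size nnz        */
        double *val;    /* size nnz        */
    } csr_t;

    /* GAXPY-Row sparse-sparse product C = A*B: all three matrices are traversed
     * row by row, which matches the CSR layout.  For each row i of A, every
     * nonzero A(i,k) scales row k of B into a dense accumulator, which is then
     * gathered into row i of C. */
    static csr_t *csr_spgemm_gaxpy_row(const csr_t *A, const csr_t *B)
    {
        int n = A->n_rows, m = B->n_cols;
        double *acc  = calloc(m, sizeof *acc);   /* dense accumulator for one row of C */
        int    *mark = calloc(m, sizeof *mark);  /* mark[j] != 0 iff column j is live   */
        int    *cols = malloc(m * sizeof *cols); /* columns touched in the current row  */

        csr_t *C = malloc(sizeof *C);
        C->n_rows = n;
        C->n_cols = m;
        C->row_ptr = malloc((n + 1) * sizeof *C->row_ptr);
        int cap = A->nnz + B->nnz;               /* initial capacity guess, grown on demand */
        C->col_idx = malloc(cap * sizeof *C->col_idx);
        C->val     = malloc(cap * sizeof *C->val);

        int nnz = 0;
        C->row_ptr[0] = 0;
        for (int i = 0; i < n; i++) {
            int row_len = 0;
            /* GAXPY step: accumulate A(i,k) * B(k,:) for every nonzero in row i of A. */
            for (int p = A->row_ptr[i]; p < A->row_ptr[i + 1]; p++) {
                int k = A->col_idx[p];
                double aik = A->val[p];
                for (int q = B->row_ptr[k]; q < B->row_ptr[k + 1]; q++) {
                    int j = B->col_idx[q];
                    if (!mark[j]) { mark[j] = 1; cols[row_len++] = j; }
                    acc[j] += aik * B->val[q];
                }
            }
            /* Gather the accumulated row into C and reset the scratch arrays. */
            if (nnz + row_len > cap) {
                cap = 2 * (nnz + row_len);
                C->col_idx = realloc(C->col_idx, cap * sizeof *C->col_idx);
                C->val     = realloc(C->val,     cap * sizeof *C->val);
            }
            for (int t = 0; t < row_len; t++) {
                int j = cols[t];
                C->col_idx[nnz] = j;
                C->val[nnz++]   = acc[j];
                acc[j]  = 0.0;
                mark[j] = 0;
            }
            C->row_ptr[i + 1] = nnz;
        }
        C->nnz = nnz;
        free(acc); free(mark); free(cols);
        return C;
    }

Improvement techniques of the kind evaluated in the paper, such as loop-invariant code motion and loop unrolling, would typically be applied to the innermost accumulation loop above. Note that the column indices within each row of C come out unsorted, which is common for accumulator-based sparse product kernels.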