Optimizing massively parallel sparse matrix computing on ARM many-core processor

Impact Factor: 2.0 · CAS Tier 4 (Computer Science) · JCR Q2, COMPUTER SCIENCE, THEORY & METHODS
Jiang Zheng, Jiazhi Jiang, Jiangsu Du, Dan Huang, Yutong Lu
{"title":"Optimizing massively parallel sparse matrix computing on ARM many-core processor","authors":"Jiang Zheng ,&nbsp;Jiazhi Jiang ,&nbsp;Jiangsu Du,&nbsp;Dan Huang,&nbsp;Yutong Lu","doi":"10.1016/j.parco.2023.103035","DOIUrl":null,"url":null,"abstract":"<div><p><span><span>Sparse matrix multiplication is ubiquitous in many applications such as graph processing and numerical simulation. In recent years, numerous efficient sparse matrix multiplication algorithms and computational libraries have been proposed. However, most of them are oriented to x86 or GPU platforms, while the optimization on ARM many-core platforms has not been well investigated. Our experiments show that existing sparse matrix multiplication libraries for ARM many-core CPU cannot achieve expected parallel performance. Compared with traditional multi-core CPU, ARM many-core CPU has far more cores and often adopts </span>NUMA techniques to scale the </span>memory bandwidth. Its parallel efficiency tends to be restricted by NUMA configuration, memory bandwidth cache contention, etc.</p><p>In this paper, we propose optimized implementations for sparse matrix computing on ARM many-core CPU. We propose various optimization techniques for several routines of sparse matrix multiplication to ensure coalesced access<span> of matrix elements in the memory. In detail, the optimization techniques include a fine-tuned CSR-based format for ARM architecture, co-optimization of Gustavson’s algorithm with hierarchical cache and dense array strategy to mitigate performance loss caused by handling compressed storage formats. We exploit the coarse-grained NUMA-aware strategy for inter-node parallelism and the fine-grained cache-aware strategy for intra-node parallelism to improve the parallel efficiency of sparse matrix multiplication. The evaluation shows that our implementation consistently outperforms the existing library on ARM many-core processor.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103035"},"PeriodicalIF":2.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Parallel Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167819123000418","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Sparse matrix multiplication is ubiquitous in applications such as graph processing and numerical simulation. In recent years, numerous efficient sparse matrix multiplication algorithms and computational libraries have been proposed. However, most of them target x86 or GPU platforms, while optimization for ARM many-core platforms has not been well investigated. Our experiments show that existing sparse matrix multiplication libraries for ARM many-core CPUs cannot achieve the expected parallel performance. Compared with traditional multi-core CPUs, an ARM many-core CPU has far more cores and often adopts NUMA techniques to scale memory bandwidth, so its parallel efficiency tends to be restricted by the NUMA configuration, memory bandwidth, cache contention, and similar factors.
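To make the NUMA constraint concrete, the sketch below (our own illustration, not code from the paper) uses libnuma to bind execution to a node and to allocate a matrix partition in that node's local memory. On such machines, remote accesses pay extra latency and share the inter-node interconnect, which is why NUMA placement caps parallel efficiency.

```c
/* Minimal sketch (illustrative, not the paper's code): node-local
 * placement with libnuma.  Each pass binds the calling thread to one
 * NUMA node and allocates its matrix partition there, so accesses stay
 * node-local instead of crossing the interconnect.  Compile with -lnuma. */
#include <numa.h>
#include <stdio.h>

static double *alloc_partition_on_node(size_t n_doubles, int node)
{
    /* Memory physically backed on `node`; threads on other nodes would
     * pay a latency/bandwidth penalty to touch it. */
    return numa_alloc_onnode(n_doubles * sizeof(double), node);
}

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }
    int nodes = numa_num_configured_nodes();
    for (int node = 0; node < nodes; node++) {
        numa_run_on_node(node);                     /* pin execution to node */
        size_t n = 1u << 20;                        /* partition size (demo) */
        double *part = alloc_partition_on_node(n, node);
        /* ... initialize and process the node-local partition ... */
        numa_free(part, n * sizeof(double));
    }
    return 0;
}
```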

In this paper, we propose optimized implementations of sparse matrix computing on an ARM many-core CPU. We propose optimization techniques for several sparse matrix multiplication routines to ensure coalesced access to matrix elements in memory. Specifically, these techniques include a CSR-based format fine-tuned for the ARM architecture, and the co-optimization of Gustavson's algorithm with a hierarchical-cache strategy and a dense-array strategy that mitigates the performance loss caused by handling compressed storage formats. We exploit a coarse-grained NUMA-aware strategy for inter-node parallelism and a fine-grained cache-aware strategy for intra-node parallelism to improve the parallel efficiency of sparse matrix multiplication. Our evaluation shows that our implementation consistently outperforms the existing library on an ARM many-core processor.
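As a rough illustration of the dense-array flavor of Gustavson's algorithm (a minimal sketch under our own assumptions, not the paper's tuned kernel), the routine below multiplies two CSR matrices row by row, accumulating each output row in a per-thread dense array so that every update is O(1). It assumes a prior symbolic pass has already filled C->row_ptr and allocated C->col_idx and C->val, and it omits the NUMA-aware and cache-aware scheduling the paper adds on top.

```c
/* Illustrative sketch: row-wise Gustavson SpGEMM, C = A * B, on CSR
 * matrices, using a per-thread dense accumulator instead of hashed or
 * sorted compressed intermediates. */
#include <stdlib.h>

typedef struct {            /* CSR: compressed sparse row */
    int     n_rows, n_cols;
    int    *row_ptr;        /* length n_rows + 1 */
    int    *col_idx;        /* length nnz */
    double *val;            /* length nnz */
} csr_t;

void spgemm_numeric(const csr_t *A, const csr_t *B, csr_t *C)
{
#pragma omp parallel
    {
        /* Per-thread scratch, one slot per column of B ("dense array"). */
        double *acc   = calloc((size_t)B->n_cols, sizeof *acc);
        int    *flags = calloc((size_t)B->n_cols, sizeof *flags);
        int    *cols  = malloc((size_t)B->n_cols * sizeof *cols);

#pragma omp for schedule(dynamic, 64)   /* rows vary widely in cost */
        for (int i = 0; i < A->n_rows; i++) {
            int n_touched = 0;
            /* Gustavson: scale and merge the rows of B selected by row i of A. */
            for (int p = A->row_ptr[i]; p < A->row_ptr[i + 1]; p++) {
                int    k   = A->col_idx[p];
                double aik = A->val[p];
                for (int q = B->row_ptr[k]; q < B->row_ptr[k + 1]; q++) {
                    int j = B->col_idx[q];
                    if (!flags[j]) {            /* first hit on column j */
                        flags[j] = 1;
                        cols[n_touched++] = j;
                    }
                    acc[j] += aik * B->val[q];  /* O(1) dense accumulation */
                }
            }
            /* Scatter the row into C (columns in insertion order, unsorted)
             * and reset the accumulator for the next row. */
            int out = C->row_ptr[i];
            for (int t = 0; t < n_touched; t++) {
                int j = cols[t];
                C->col_idx[out] = j;
                C->val[out++]   = acc[j];
                acc[j]   = 0.0;
                flags[j] = 0;
            }
        }
        free(acc); free(flags); free(cols);
    }
}
```

The dense accumulator trades O(n_cols) per-thread scratch for branch-light O(1) updates; in a NUMA-aware variant, each thread would first-touch its own scratch buffers so they are placed on its local node.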

Source journal
Parallel Computing
Field: Engineering & Technology - Computer Science: Theory & Methods
CiteScore: 3.50
Self-citation rate: 7.10%
Articles per year: 49
Review time: 4.5 months
Journal description: Parallel Computing is an international journal presenting the practical use of parallel computer systems, including high performance architecture, system software, programming systems and tools, and applications. Within this context the journal covers all aspects of high-end parallel computing, from single homogeneous or heterogeneous computing nodes to large-scale multi-node systems.

Parallel Computing features original research work and review articles as well as novel or illustrative accounts of application experience with (and techniques for) the use of parallel computers. We also welcome studies reproducing prior publications that either confirm or disprove prior published results.

Particular technical areas of interest include, but are not limited to:
- System software for parallel computer systems, including programming languages (new languages as well as compilation techniques), operating systems (including middleware), and resource management (scheduling and load balancing).
- Enabling software, including debuggers, performance tools, and system and numeric libraries.
- General hardware (architecture) concepts, new technologies enabling the realization of such concepts, and details of commercially available systems.
- Software engineering and productivity as it relates to parallel computing.
- Applications (including scientific computing, deep learning, and machine learning) or tool case studies demonstrating novel ways to achieve parallelism.
- Performance measurement results on state-of-the-art systems.
- Approaches to effectively utilize large-scale parallel computing, including new algorithms or algorithm analysis with demonstrated relevance to real applications, using existing or next-generation parallel computer architectures.
- Parallel I/O systems, both hardware and software.
- Networking technology for support of high-speed computing, demonstrating the impact of high-speed computation on parallel applications.