{"title":"Scaling and Selecting GPU Methods for All Pairs Shortest Paths (APSP) Computations","authors":"Yang Xia, Peng Jiang, G. Agrawal, R. Ramnath","doi":"10.1109/ipdps53621.2022.00027","DOIUrl":null,"url":null,"abstract":"All Pairs Shortest Path (APSP) is one of the graph problems where the output size is significantly larger than the input size. This paper examines the issues in scaling GPU implementations for this problem beyond the memory limits. Because the existing (in-core) methods offer a complex trade-off between the overall computation complexity and the available parallelism, choosing the best out-of-core version for a given matrix is challenging. We develop three efficient out-of-core implementations, which are based on the blocked Floyd-Warshall algorithm, Johnson's algorithm, and the boundary algorithm, respectively. Next, we develop a methodology to select the best implementation for a given graph. Experimental results show that compared with an efficient multi-core APSP implementation, the out-of-core version achieves speedups of 8.22 to 12.40 for graphs with a small separator, and speedups of 2.23 to 2.79 for other sparse graphs, and our models can select the best implementation in most cases.","PeriodicalId":321801,"journal":{"name":"2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ipdps53621.2022.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
All Pairs Shortest Path (APSP) is one of the graph problems in which the output size is significantly larger than the input size. This paper examines the issues in scaling GPU implementations of this problem beyond GPU memory limits. Because the existing (in-core) methods offer a complex trade-off between overall computational complexity and available parallelism, choosing the best out-of-core version for a given matrix is challenging. We develop three efficient out-of-core implementations, based on the blocked Floyd-Warshall algorithm, Johnson's algorithm, and the boundary algorithm, respectively. Next, we develop a methodology to select the best implementation for a given graph. Experimental results show that, compared with an efficient multi-core APSP implementation, the out-of-core versions achieve speedups of 8.22 to 12.40 for graphs with a small separator and speedups of 2.23 to 2.79 for other sparse graphs, and our models select the best implementation in most cases.
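For reference, the tiling structure underlying blocked Floyd-Warshall (one of the three bases named above) can be sketched as follows. This is a minimal CPU-side illustration in Python/NumPy, not the paper's GPU or out-of-core implementation; the tile size `b`, the helper names, and the dense-matrix setup are illustrative assumptions, and the per-pivot vectorized min-plus update assumes the graph has no negative cycles.

```python
import numpy as np

def fw_block(C, A, B):
    """In-place min-plus update of tile C via tiles A (left) and B (right):
    C[i, j] = min(C[i, j], A[i, t] + B[t, j]) over intermediate t.
    Vectorizing over (i, j) for a fixed t is valid when no negative cycles
    exist, since the pivot row/column entries cannot shrink within that t."""
    b = C.shape[0]
    for t in range(b):
        C[:] = np.minimum(C, A[:, t:t + 1] + B[t:t + 1, :])

def blocked_floyd_warshall(dist, b):
    """Blocked Floyd-Warshall on a dense n x n distance matrix.
    dist holds edge weights, 0 on the diagonal, np.inf for missing edges;
    n must be divisible by the tile size b (padding handled by the caller)."""
    n = dist.shape[0]
    assert n % b == 0
    nb = n // b
    tile = lambda i, j: dist[i * b:(i + 1) * b, j * b:(j + 1) * b]
    for k in range(nb):
        # Phase 1: diagonal (pivot) tile, updated using itself.
        fw_block(tile(k, k), tile(k, k), tile(k, k))
        # Phase 2: tiles in the pivot row and pivot column.
        for j in range(nb):
            if j != k:
                fw_block(tile(k, j), tile(k, k), tile(k, j))
                fw_block(tile(j, k), tile(j, k), tile(k, k))
        # Phase 3: all remaining tiles, independent of one another.
        for i in range(nb):
            for j in range(nb):
                if i != k and j != k:
                    fw_block(tile(i, j), tile(i, k), tile(k, j))
    return dist
```

The three-phase structure is what makes the method attractive for out-of-core execution: within a round, only the pivot tile, a pivot row/column tile, and the target tile need to be resident at once, so phase 3 tiles can be streamed through limited device memory.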