Jiaxin Wang, Kun Wang, Zhen-Guo Yan, Xiaofeng He, Tiegang Liu
{"title":"基于 GPGPU 的直接非连续伽勒金方法的异构并行实施","authors":"Jiaxin Wang , Kun Wang , Zhen-Guo Yan , Xiaofeng He , Tiegang Liu","doi":"10.1016/j.matcom.2024.09.034","DOIUrl":null,"url":null,"abstract":"<div><div>This paper implements the CUDA and hybrid CUDA/MPI parallel computation based on GPGPU heterogeneous parallel strategies for the direct discontinuous method (DDG) on 3D unstructured grids. The direct discontinuous Galerkin method inherits the compactness of the discontinuous Galerkin (DG) method, making it well-suited for large-scale parallelization. Firstly, we present the full single-GPU implementation of the three-dimensional (3D) DDG method with cell-level parallelism and face-level parallelism. Herein, all the numerical operators including volume integration, face integration (numerical fluxes), conservation variables calculation, and time iteration, are implemented by designing the corresponding kernel functions. Especially, we implement several key memory access optimization strategies, which are crucial for performance improvement. Operators merging and shared memory utilizing reduces the number of global access. Such memory Coalescing and data structure reconstruction apparently enhances the efficiency of global memory access. To align with data access pattern, we employ atomic operations to eliminate data race conditions. Furthermore, we propose a full hybrid GPU/CPU heterogeneous parallel strategy to implement multi-GPU parallelization of the DDG method, where asynchronization optimization is introduced to fully overlap communication and computation and basically eliminates the communication overhead. Finally, several numerical tests are conducted on Tesla V100 Cards to show performance of the parallelization. In addition, we utilize the NVIDIA performance testing tool, <span><math><mrow><mi>n</mi><mi>v</mi><mi>p</mi><mi>r</mi><mi>o</mi><mi>f</mi></mrow></math></span>, to evaluate multiple metrics of the kernel functions and conduct a detailed analysis of the results. In the tests of parallel scalability, the weak scaling efficiency achieves 97% from 4 to 32 GPU cards, and the strong scaling efficiency is 90% from 1 to 8 GPU cards.</div></div>","PeriodicalId":49856,"journal":{"name":"Mathematics and Computers in Simulation","volume":"229 ","pages":"Pages 362-391"},"PeriodicalIF":4.4000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GPGPU-based heterogeneous parallel implementation of direct discontinuous Galerkin methods\",\"authors\":\"Jiaxin Wang , Kun Wang , Zhen-Guo Yan , Xiaofeng He , Tiegang Liu\",\"doi\":\"10.1016/j.matcom.2024.09.034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper implements the CUDA and hybrid CUDA/MPI parallel computation based on GPGPU heterogeneous parallel strategies for the direct discontinuous method (DDG) on 3D unstructured grids. The direct discontinuous Galerkin method inherits the compactness of the discontinuous Galerkin (DG) method, making it well-suited for large-scale parallelization. Firstly, we present the full single-GPU implementation of the three-dimensional (3D) DDG method with cell-level parallelism and face-level parallelism. Herein, all the numerical operators including volume integration, face integration (numerical fluxes), conservation variables calculation, and time iteration, are implemented by designing the corresponding kernel functions. 
Especially, we implement several key memory access optimization strategies, which are crucial for performance improvement. Operators merging and shared memory utilizing reduces the number of global access. Such memory Coalescing and data structure reconstruction apparently enhances the efficiency of global memory access. To align with data access pattern, we employ atomic operations to eliminate data race conditions. Furthermore, we propose a full hybrid GPU/CPU heterogeneous parallel strategy to implement multi-GPU parallelization of the DDG method, where asynchronization optimization is introduced to fully overlap communication and computation and basically eliminates the communication overhead. Finally, several numerical tests are conducted on Tesla V100 Cards to show performance of the parallelization. In addition, we utilize the NVIDIA performance testing tool, <span><math><mrow><mi>n</mi><mi>v</mi><mi>p</mi><mi>r</mi><mi>o</mi><mi>f</mi></mrow></math></span>, to evaluate multiple metrics of the kernel functions and conduct a detailed analysis of the results. In the tests of parallel scalability, the weak scaling efficiency achieves 97% from 4 to 32 GPU cards, and the strong scaling efficiency is 90% from 1 to 8 GPU cards.</div></div>\",\"PeriodicalId\":49856,\"journal\":{\"name\":\"Mathematics and Computers in Simulation\",\"volume\":\"229 \",\"pages\":\"Pages 362-391\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mathematics and Computers in Simulation\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0378475424003896\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematics and Computers in Simulation","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0378475424003896","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
GPGPU-based heterogeneous parallel implementation of direct discontinuous Galerkin methods
This paper presents CUDA and hybrid CUDA/MPI parallel implementations, based on GPGPU heterogeneous parallel strategies, of the direct discontinuous Galerkin (DDG) method on 3D unstructured grids. The DDG method inherits the compactness of the discontinuous Galerkin (DG) method, making it well suited to large-scale parallelization. First, we present a full single-GPU implementation of the three-dimensional (3D) DDG method with cell-level and face-level parallelism. All the numerical operators, including volume integration, face integration (numerical fluxes), conservation-variable calculation, and time iteration, are implemented as corresponding kernel functions. In particular, we implement several key memory-access optimization strategies that are crucial for performance: operator merging and shared-memory utilization reduce the number of global memory accesses, while memory coalescing and data-structure reconstruction markedly improve the efficiency of the accesses that remain. To match the data-access pattern, we employ atomic operations to eliminate data-race conditions. Furthermore, we propose a fully hybrid GPU/CPU heterogeneous parallel strategy for multi-GPU parallelization of the DDG method, in which asynchronous execution is introduced to overlap communication with computation, essentially eliminating the communication overhead. Finally, several numerical tests are conducted on Tesla V100 cards to demonstrate the performance of the parallelization. In addition, we use the NVIDIA profiling tool nvprof to evaluate multiple metrics of the kernel functions and analyze the results in detail. In the parallel-scalability tests, the weak-scaling efficiency reaches 97% from 4 to 32 GPU cards, and the strong-scaling efficiency is 90% from 1 to 8 GPU cards.
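As a concrete illustration of the face-level parallelism, coalesced access, and atomic accumulation described in the abstract, consider the following minimal CUDA sketch. It is not the authors' code: the kernel name faceIntegralKernel and all array names are hypothetical, and the numerical flux is assumed to be precomputed per face. One thread handles one face; structure-of-arrays index and flux arrays keep the global loads coalesced.

// Minimal sketch under the assumptions above. Double-precision atomicAdd
// requires compute capability >= 6.0 (the V100 is 7.0).
__global__ void faceIntegralKernel(int nFaces,
                                   const int*    __restrict__ faceLeft,     // left-cell id per face (SoA)
                                   const int*    __restrict__ faceRight,    // right-cell id per face (SoA)
                                   const double* __restrict__ faceFlux,     // precomputed numerical flux per face
                                   double*       cellResidual)              // per-cell residual accumulator
{
    int f = blockIdx.x * blockDim.x + threadIdx.x;
    if (f >= nFaces) return;

    double F = faceFlux[f];
    // Flux leaves the left cell and enters the right cell. Several faces
    // share a cell, so plain writes would race; atomicAdd makes the
    // accumulation order-independent and correct.
    atomicAdd(&cellResidual[faceLeft[f]],  -F);
    atomicAdd(&cellResidual[faceRight[f]],  F);
}

The atomics trade a small serialization cost for correctness, matching the abstract's use of atomic operations to eliminate data races. The multi-GPU communication/computation overlap can likewise be sketched with the standard two-stream pattern below. This is again a hypothetical host-side skeleton, not the paper's implementation: packHaloKernel, interiorKernel, boundaryKernel, and the halo buffers are assumed to exist, and <cuda_runtime.h> and <mpi.h> are required.

// Hypothetical skeleton: halo packing and exchange run on one stream while
// interior-cell work runs on another, hiding the MPI transfer.
cudaStream_t sInterior, sHalo;
cudaStreamCreate(&sInterior);
cudaStreamCreate(&sHalo);

// 1. Pack halo cells and start the device-to-host copy on the halo stream.
packHaloKernel<<<gridHalo, block, 0, sHalo>>>(d_state, d_sendBuf);
cudaMemcpyAsync(h_sendBuf, d_sendBuf, haloBytes, cudaMemcpyDeviceToHost, sHalo);

// 2. Interior work proceeds concurrently on its own stream.
interiorKernel<<<gridInterior, block, 0, sInterior>>>(d_state, d_cellResidual);

// 3. Exchange halos once the send buffer is ready, then finish boundary faces.
cudaStreamSynchronize(sHalo);
MPI_Sendrecv(h_sendBuf, haloCount, MPI_DOUBLE, neighborRank, 0,
             h_recvBuf, haloCount, MPI_DOUBLE, neighborRank, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
cudaMemcpyAsync(d_recvBuf, h_recvBuf, haloBytes, cudaMemcpyHostToDevice, sHalo);
boundaryKernel<<<gridHalo, block, 0, sHalo>>>(d_state, d_recvBuf, d_cellResidual);
cudaDeviceSynchronize();

As long as the interior kernel outlasts the halo exchange, the transfer is fully hidden behind useful work, consistent with the abstract's claim that the communication overhead is essentially eliminated.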
Journal Introduction:
The aim of the journal is to provide an international forum for the dissemination of up-to-date information in the fields of mathematics and computers, in particular (but not exclusively) as they apply to the dynamics of systems, their simulation, and scientific computation in general. Published material ranges from short, concise research papers to more general tutorial articles.
Mathematics and Computers in Simulation, published monthly, is the official organ of IMACS, the International Association for Mathematics and Computers in Simulation (formerly AICA). This Association, founded in 1955 and legally incorporated in 1956, is a member of FIACC (the Five International Associations Coordinating Committee), together with IFIP, IFAC, IFORS and IMEKO.
Topics covered by the journal include mathematical tools in:
• The foundations of systems modelling
• Numerical analysis and the development of algorithms for simulation
They also include considerations about computer hardware for simulation and about special software and compilers.
The journal also publishes articles concerned with specific applications of modelling and simulation in science and engineering, with relevant applied mathematics, the general philosophy of systems simulation, and their impact on disciplinary and interdisciplinary research.
The journal includes a Book Review section and a "News on IMACS" section that contains a Calendar of future Conferences/Events and other information about the Association.