Guoqing Xiao;Li Xia;Yuedan Chen;Hongyang Chen;Wangdong Yang
DCGG: A Dynamically Adaptive and Hardware-Software Coordinated Runtime System for GNN Acceleration on GPUs
DOI: 10.1109/TC.2025.3558042
IEEE Transactions on Computers, vol. 74, no. 7, pp. 2293-2305, published 2025-04-09
URL: https://ieeexplore.ieee.org/document/10959012/
Citations: 0
Abstract
Graph neural networks (GNNs) are a prominent trend in graph-based deep learning, known for their capacity to produce high-quality node embeddings. However, existing GNN frameworks are designed primarily at the algorithm level and do not fully exploit the GPU's hardware architecture. To this end, we propose DCGG, a dynamically adaptive runtime framework that accelerates diverse GNN workloads on GPU platforms. DCGG performs deeper optimizations than prior frameworks, mainly in load balancing and in matching the software to the underlying hardware. Accordingly, three optimization strategies are proposed. First, we propose a dynamic 2D workload management method and build customized optimizations on it, effectively reducing redundant memory operations. Second, a new slicing strategy, combined with hardware features, improves the efficiency of data reuse. Third, DCGG uses a Quantitative Dimension Parallel Strategy to choose embedding-dimension tilings and parallelization methods, greatly improving load balance and data locality. Extensive experiments demonstrate that DCGG outperforms state-of-the-art GNN computing frameworks, such as Deep Graph Library (up to 3.10$\boldsymbol{\times}$ faster) and GNNAdvisor (up to 2.80$\boldsymbol{\times}$ faster), on mainstream GNN architectures across various datasets.
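To make the "2D workload management" idea concrete, the following is a minimal sketch, not the authors' code: in the 2D scheme (as popularized by GNNAdvisor, which DCGG is compared against), work is split along two axes at once, partitioning each node's neighbor list into fixed-size neighbor groups and the embedding dimension into tiles, so that neither a long neighbor list nor a wide embedding creates a straggler. The function name and tuple layout below are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of 2D workload partitioning for GNN aggregation.
# Axis 1: each node's neighbor list (from a CSR row-pointer array) is cut
#         into neighbor groups of at most `group_size` edges.
# Axis 2: the embedding dimension is cut into tiles of width `dim_tile`.
# Each (neighbor-group, dimension-tile) pair is one schedulable work unit.

def partition_2d(row_ptr, group_size, dim, dim_tile):
    """Return work units (node, nbr_start, nbr_end, d_start, d_end).

    `row_ptr` is a CSR row-pointer list; all names are illustrative.
    """
    units = []
    for node in range(len(row_ptr) - 1):
        start, end = row_ptr[node], row_ptr[node + 1]
        for g in range(start, end, group_size):      # neighbor groups
            for d in range(0, dim, dim_tile):        # dimension tiles
                units.append((node,
                              g, min(g + group_size, end),
                              d, min(d + dim_tile, dim)))
    return units

if __name__ == "__main__":
    # Toy CSR graph: node 0 has 5 neighbors (edges 0..4), node 1 has 2 (5..6).
    row_ptr = [0, 5, 7]
    for unit in partition_2d(row_ptr, group_size=2, dim=4, dim_tile=2):
        print(unit)
```

With `group_size=2` and `dim_tile=2`, node 0's five neighbors yield three groups and node 1's two neighbors yield one, each crossed with two dimension tiles: eight units of roughly equal cost, regardless of the skew between the two nodes' degrees.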
Journal description:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.