Acceleration Algorithms in GNNs: A Survey

Lu Ma; Zeang Sheng; Xunkai Li; Xinyi Gao; Zhezheng Hao; Ling Yang; Xiaonan Nie; Jiawei Jiang; Wentao Zhang; Bin Cui

IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 6, pp. 3173-3192, published 2025-02-11. DOI: 10.1109/TKDE.2025.3540787. Available at: https://ieeexplore.ieee.org/document/10882936/
Abstract: Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness in various graph-based tasks, but their inefficiency in training and inference poses significant challenges for scaling to real-world, large-scale applications. To address these challenges, a plethora of algorithms have been developed to accelerate GNN training and inference, garnering substantial interest from the research community. This paper presents a systematic review of these acceleration algorithms, categorizing them into three main topics: training acceleration, inference acceleration, and execution acceleration. For training acceleration, we discuss techniques such as graph sampling and GNN simplification. In inference acceleration, we focus on knowledge distillation, GNN quantization, and GNN pruning. For execution acceleration, we explore GNN binarization and graph condensation. Additionally, we review several libraries related to GNN acceleration, including our Scalable Graph Learning library, and propose future research directions.
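To make the training-acceleration theme concrete, below is a minimal, self-contained sketch of node-wise neighbor sampling, the idea behind GraphSAGE-style mini-batch training that falls under the survey's graph-sampling category. The toy graph, the function name sample_neighbors, and its parameters are illustrative assumptions for this sketch, not code from the paper or any particular library.

```python
import random

# Hypothetical toy graph as an adjacency list: node -> list of neighbors.
graph = {
    0: [1, 2, 3],
    1: [0, 2],
    2: [0, 1, 3],
    3: [0, 2],
}

def sample_neighbors(graph, seeds, fanout, num_hops, rng=random):
    """Uniformly sample up to `fanout` neighbors per node for `num_hops`
    hops, returning the node set of the sampled computation subgraph.
    Each GNN layer then aggregates over this bounded sample instead of
    every neighbor, which is the core of sampling-based training."""
    frontier = set(seeds)
    sampled = set(seeds)
    for _ in range(num_hops):
        next_frontier = set()
        for node in frontier:
            neighbors = graph.get(node, [])
            k = min(fanout, len(neighbors))
            next_frontier.update(rng.sample(neighbors, k))
        sampled |= next_frontier
        frontier = next_frontier
    return sampled

# Sample a 2-hop subgraph around node 0 with at most 2 neighbors per hop.
print(sample_neighbors(graph, seeds=[0], fanout=2, num_hops=2))
```

Bounding the fanout caps each seed node's computation tree at roughly fanout^num_hops nodes regardless of the full graph's degree distribution, which is what lets mini-batch training cost stay fixed per batch rather than growing with graph size.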
Journal Introduction:
The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.