SaGNN: a Sample-based GNN Training and Inference Hardware Accelerator
Haoyang Wang, Shengbing Zhang, Kaijie Feng, Miao Wang, Zhao Yang
2023 IEEE International Symposium on Circuits and Systems (ISCAS), May 21, 2023. DOI: 10.1109/ISCAS46773.2023.10182227
Graph neural network (GNN) workloads involve a large number of irregular data accesses and sparse matrix multiplications, leading to under-utilization of computing resources. The problem becomes even more complex and challenging for large-graph training. Scaling GNN training through sampling is an effective solution; however, current GNN accelerators do not support the mini-batch structure it produces. We analyze the operational characteristics of GNNs from multiple aspects, account for the acceleration requirements of both the training and inference processes, and propose the SaGNN system architecture. SaGNN offers multiple working modes that provide acceleration for different GNN frameworks while ensuring system configurability and scalability. Compared to related works, SaGNN delivers a 5.0x improvement in system performance.
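For context, sample-based (mini-batch) GNN training bounds the number of neighbors aggregated per node, which regularizes the irregular access pattern described above. The following is a minimal sketch of that idea in plain Python; the toy graph, the fanout parameter, and the mean-aggregation step are illustrative assumptions in the style of GraphSAGE-like sampling, not the SaGNN hardware pipeline itself.

```python
import random

# Toy adjacency list and node features (illustrative only, not from the paper).
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.5, 0.5]}

def sample_neighbors(node, fanout):
    """Sample at most `fanout` neighbors, bounding per-node work."""
    nbrs = graph[node]
    return random.sample(nbrs, min(fanout, len(nbrs)))

def aggregate(node, fanout=2):
    """Mean-aggregate sampled neighbor features (one sampling-based layer)."""
    nbrs = sample_neighbors(node, fanout)
    dim = len(features[node])
    agg = [0.0] * dim
    for n in nbrs:
        for d in range(dim):
            agg[d] += features[n][d]
    return [x / len(nbrs) for x in agg]

# A mini-batch touches only the sampled subgraph, never the full graph.
batch = [0, 3]
print({v: aggregate(v) for v in batch})
```

Because each node's cost is capped by the fanout rather than its full degree, the per-batch compute and memory footprint become predictable, which is what makes this structure amenable to hardware acceleration.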