SaGNN: a Sample-based GNN Training and Inference Hardware Accelerator

Haoyang Wang, Shengbing Zhang, Kaijie Feng, Miao Wang, Zhao Yang
2023 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/ISCAS46773.2023.10182227
Published: 2023-05-21
Citations: 0

Abstract

Graph neural network (GNN) workloads involve a large number of irregular data accesses and sparse matrix multiplications, leaving computing resources under-utilized. The problem becomes even more complex and challenging for large-graph training. Sample-based (mini-batch) training is an effective way to scale GNN training; however, current GNN accelerators do not support the mini-batch structure. We analyze the operational characteristics of GNNs from multiple aspects, account for the acceleration requirements of both GNN training and inference, and propose the SaGNN system architecture. SaGNN offers multiple working modes that provide acceleration for different GNN frameworks while ensuring system configurability and scalability. Compared to related work, SaGNN delivers a 5.0x improvement in system performance.
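The abstract does not describe SaGNN's sampler, but the "sample-based" mini-batch structure it refers to is commonly realized by layer-wise neighbor sampling (as in GraphSAGE-style training): starting from a set of seed nodes, each GNN layer keeps only a fixed number of sampled neighbors. The sketch below is purely illustrative and not from the paper; the function name, graph, and fanouts are assumptions made for the example.

```python
import random

def sample_blocks(adj, seeds, fanouts, rng):
    """Illustrative neighbor sampling for mini-batch GNN training:
    for each layer, keep at most `fanout` neighbors per frontier node.
    NOTE: a generic sketch, not SaGNN's actual sampling scheme."""
    blocks = []
    frontier = list(seeds)
    for fanout in fanouts:              # one sampled block per GNN layer
        block = {}
        next_frontier = set(frontier)
        for v in frontier:
            nbrs = adj.get(v, [])
            # keep all neighbors if few, otherwise sample without replacement
            picked = nbrs if len(nbrs) <= fanout else rng.sample(nbrs, fanout)
            block[v] = picked
            next_frontier.update(picked)
        blocks.append(block)
        frontier = sorted(next_frontier)
    return blocks

# Tiny hypothetical graph: adjacency lists keyed by node id.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
rng = random.Random(0)                  # fixed seed for reproducibility
blocks = sample_blocks(adj, seeds=[0], fanouts=[2, 2], rng=rng)
```

Each entry of `blocks` maps a frontier node to its sampled neighbors for one layer, which bounds the per-batch memory footprint regardless of full-graph size; this bounded, regular structure is what a mini-batch-aware accelerator can exploit.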