Performance Modelling of Graph Neural Networks

Pranjal Naman, Yogesh L. Simmhan
{"title":"Performance Modelling of Graph Neural Networks","authors":"Pranjal Naman, Yogesh L. Simmhan","doi":"10.1109/CCGridW59191.2023.00076","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed a rapid rise in the popularity of Graph Neural Networks (GNNs) that address a wide variety of domains using different architectures. However, as relevant graph datasets become diverse in size, sparsity and features, it becomes important to quantify the effect of different graph properties on the training time for different GNN architectures. This will allow us to design compute-aware GNN architectures for specific problems, and further extend this for distributed training. In this paper, we formulate the calculation of the Floating Point Operations (FLOPs) required for a single forward pass through layers of a GNN. We report the analytical calculations for GraphConv and GraphSAGE models and compare against their profiling results for 10 graphs with varying properties. We observe that there is a strong correlation between our theoretical expectation of the number of FLOPs and the experimental execution time for a forward pass.","PeriodicalId":341115,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","volume":"54 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGridW59191.2023.00076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent years have witnessed a rapid rise in the popularity of Graph Neural Networks (GNNs) that address a wide variety of domains using different architectures. However, as relevant graph datasets become diverse in size, sparsity and features, it becomes important to quantify the effect of different graph properties on the training time for different GNN architectures. This will allow us to design compute-aware GNN architectures for specific problems, and further extend this for distributed training. In this paper, we formulate the calculation of the Floating Point Operations (FLOPs) required for a single forward pass through layers of a GNN. We report the analytical calculations for GraphConv and GraphSAGE models and compare against their profiling results for 10 graphs with varying properties. We observe that there is a strong correlation between our theoretical expectation of the number of FLOPs and the experimental execution time for a forward pass.
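
The page does not reproduce the paper's FLOP formulation, but the kind of per-layer accounting the abstract describes can be illustrated. The Python sketch below gives a common first-order FLOP estimate for a single forward pass through one GraphConv-style layer and one GraphSAGE layer with a mean aggregator: a dense feature transform plus a sparse aggregation over the edges. The function names, the convention of counting a multiply-add as two FLOPs, and the exact term decomposition are illustrative assumptions, not the authors' formulation.

```python
def graphconv_layer_flops(num_nodes: int, num_edges: int,
                          d_in: int, d_out: int) -> int:
    """Rough FLOP estimate for one GraphConv-style layer (illustrative,
    not the paper's exact formulation)."""
    # Dense feature transform X @ W: a (|V| x d_in) by (d_in x d_out)
    # matmul, counting one multiply and one add per MAC.
    transform = 2 * num_nodes * d_in * d_out
    # Sparse neighborhood aggregation: one multiply-add per edge per
    # output feature (e.g., a degree-normalized sum over neighbors).
    aggregate = 2 * num_edges * d_out
    return transform + aggregate


def graphsage_mean_layer_flops(num_nodes: int, num_edges: int,
                               d_in: int, d_out: int) -> int:
    """Rough FLOP estimate for one GraphSAGE layer with a mean
    aggregator: average the neighbor features, then linearly transform
    the concatenation of self and aggregated features."""
    # Mean aggregation over edges, in the input feature dimension.
    aggregate = 2 * num_edges * d_in
    # Linear transform of the concatenated (2 * d_in)-wide features.
    transform = 2 * num_nodes * (2 * d_in) * d_out
    return aggregate + transform


# Example: a mid-sized graph with 100k nodes, 1M edges, 128-d features.
print(graphconv_layer_flops(100_000, 1_000_000, 128, 128))
print(graphsage_mean_layer_flops(100_000, 1_000_000, 128, 128))
```

Under this estimate, sparse graphs (|E| on the order of |V|) are dominated by the node-proportional dense transform, while denser graphs shift cost toward the edge-proportional aggregation term; this is the kind of size and sparsity effect on forward-pass time that the abstract sets out to quantify.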