{"title":"Performance Modelling of Graph Neural Networks","authors":"Pranjal Naman, Yogesh L. Simmhan","doi":"10.1109/CCGridW59191.2023.00076","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed a rapid rise in the popularity of Graph Neural Networks (GNNs) that address a wide variety of domains using different architectures. However, as relevant graph datasets become diverse in size, sparsity and features, it becomes important to quantify the effect of different graph properties on the training time for different GNN architectures. This will allow us to design compute-aware GNN architectures for specific problems, and further extend this for distributed training. In this paper, we formulate the calculation of the Floating Point Operations (FLOPs) required for a single forward pass through layers of a GNN. We report the analytical calculations for GraphConv and GraphSAGE models and compare against their profiling results for 10 graphs with varying properties. We observe that there is a strong correlation between our theoretical expectation of the number of FLOPs and the experimental execution time for a forward pass.","PeriodicalId":341115,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","volume":"54 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGridW59191.2023.00076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recent years have witnessed a rapid rise in the popularity of Graph Neural Networks (GNNs) that address a wide variety of domains using different architectures. However, as relevant graph datasets become diverse in size, sparsity and features, it becomes important to quantify the effect of different graph properties on the training time for different GNN architectures. This will allow us to design compute-aware GNN architectures for specific problems, and further extend this for distributed training. In this paper, we formulate the calculation of the Floating Point Operations (FLOPs) required for a single forward pass through layers of a GNN. We report the analytical calculations for GraphConv and GraphSAGE models and compare against their profiling results for 10 graphs with varying properties. We observe that there is a strong correlation between our theoretical expectation of the number of FLOPs and the experimental execution time for a forward pass.
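The abstract describes an analytical FLOP count for a single forward pass through GNN layers, but does not reproduce the formulas themselves. As a rough illustration of the kind of model involved, the sketch below estimates forward-pass FLOPs for one GraphConv-style layer from standard sparse-aggregation and dense matrix-multiply costs. The `flops_graphconv` helper and its cost terms are assumptions for illustration, not the authors' formulation.

```python
# Hypothetical sketch: analytical FLOP estimate for one forward pass of a
# GraphConv-style layer, derived from standard matrix-multiply costs.
# This is NOT the paper's exact formulation.

def flops_graphconv(num_nodes: int, num_edges: int,
                    f_in: int, f_out: int) -> int:
    """Rough FLOPs for one GraphConv layer: neighbourhood aggregation
    (sparse adjacency times features) plus the dense weight transform."""
    aggregation = 2 * num_edges * f_in          # one multiply-add per edge per input feature
    transform = 2 * num_nodes * f_in * f_out    # dense X @ W multiplication
    return aggregation + transform


if __name__ == "__main__":
    # Example: a Cora-sized graph (~2.7k nodes, ~10.5k edges), 1433 -> 16 features
    print(flops_graphconv(num_nodes=2_708, num_edges=10_556,
                          f_in=1_433, f_out=16))
```

Such a closed-form estimate depends only on graph properties (node and edge counts) and layer widths, which is what allows it to be correlated against measured forward-pass execution times across graphs of varying size and sparsity.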