{"title":"On the distribution alignment of propagation in graph neural networks","authors":"Qinkai Zheng , Xiao Xia , Kun Zhang , Evgeny Kharlamov , Yuxiao Dong","doi":"10.1016/j.aiopen.2022.11.006","DOIUrl":null,"url":null,"abstract":"<div><p>Graph neural networks (GNNs) have been widely adopted for modeling graph-structure data. Most existing GNN studies have focused on designing <em>different</em> strategies to propagate information over the graph structures. After systematic investigations, we observe that the propagation step in GNNs matters, but its resultant performance improvement is insensitive to the location where we apply it. Our empirical examination further shows that the performance improvement brought by propagation mostly comes from a phenomenon of <em>distribution alignment</em>, i.e., propagation over graphs actually results in the alignment of the underlying distributions between the training and test sets. The findings are instrumental to understand GNNs, e.g., why decoupled GNNs can work as good as standard GNNs.<span><sup>1</sup></span></p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"3 ","pages":"Pages 218-228"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666651022000213/pdfft?md5=e78f6562530f06a112827f05883082be&pid=1-s2.0-S2666651022000213-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI Open","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666651022000213","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Graph neural networks (GNNs) have been widely adopted for modeling graph-structured data. Most existing GNN studies have focused on designing different strategies to propagate information over the graph structures. After systematic investigations, we observe that the propagation step in GNNs matters, but its resultant performance improvement is insensitive to the location where we apply it. Our empirical examination further shows that the performance improvement brought by propagation mostly comes from a phenomenon of distribution alignment, i.e., propagation over graphs actually results in the alignment of the underlying distributions between the training and test sets. The findings are instrumental in understanding GNNs, e.g., why decoupled GNNs can work as well as standard GNNs.
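To make the propagation step concrete, below is a minimal sketch (not the authors' code) of the symmetrically normalized feature propagation commonly used in decoupled GNNs such as SGC, where features are smoothed over the graph before, and independently of, any learned transformation. The function name `propagate` and the parameter `k` (number of propagation steps) are illustrative choices; only NumPy is assumed.

```python
import numpy as np

def propagate(adj: np.ndarray, x: np.ndarray, k: int = 2) -> np.ndarray:
    """Apply k propagation steps X <- A_hat @ X,
    with A_hat = D^{-1/2} (A + I) D^{-1/2} (self-loops added)."""
    a = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1)) # D^{-1/2} from row degrees
    a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):
        x = a_hat @ x                          # one propagation step
    return x
```

Because propagation here is decoupled from learning, the smoothed features can be fed to a plain MLP or logistic regression; if propagation's benefit indeed comes from aligning the training and test feature distributions, as the paper argues, this helps explain why such decoupled models can match standard end-to-end GNNs.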