{"title":"Semi-Supervised Node Classification via Semi-Global Graph Transformer Based on Homogeneity Augmentation","authors":"Jin Li, Yisong Huang, Xinlong Chen, Yanglan Fu","doi":"10.1142/s012962642340008x","DOIUrl":null,"url":null,"abstract":"As a kind of generalization of Transformers in the graph domain, Global Graph Transformers are good at learning distant knowledge by directly doing information interactions on complete graphs, which differs from Local Graph Transformers interacting on the original structures. However, we find that most prior works focus only on graph-level tasks (e.g., graph classification) and few Graph Transformer models can effectively solve node-level tasks, especially semi-supervised node classification, which obviously has important practical significance due to the limitation and expensiveness of these node labels. In order to fill this gap, this paper first summarizes the theoretical advantages of Graph Transformers. And based on some exploring experiments, we give some discussions on the main cause of their poor practical performance in semi-supervised node classifications. Secondly, based on this analysis, we design a three-stage homogeneity augmentation framework and propose a Semi-Global Graph Transformer. Considering both global and local perspectives, the proposed model combines various technologies including self-distillation, pseudo-label filtering, pre-training and fine-tuning, and metric learning. Furthermore, it simultaneously enhances the structure and the optimization, improving its effectiveness, scalability, and generalizability. Finally, extensive experiments on seven public homogeneous and heterophilous graph benchmarks show that the proposed method can achieve competitive or much better results compared to many baseline models including state-of-the-arts.","PeriodicalId":422436,"journal":{"name":"Parallel Process. Lett.","volume":"2011 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Parallel Process. Lett.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s012962642340008x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
As a generalization of Transformers to the graph domain, Global Graph Transformers excel at learning long-range knowledge by performing information interactions directly on complete graphs, in contrast to Local Graph Transformers, which interact over the original graph structure. However, we find that most prior work focuses only on graph-level tasks (e.g., graph classification), and few Graph Transformer models can effectively solve node-level tasks, especially semi-supervised node classification, which has clear practical significance given the scarcity and cost of node labels. To fill this gap, this paper first summarizes the theoretical advantages of Graph Transformers and, based on exploratory experiments, discusses the main causes of their poor practical performance on semi-supervised node classification. Second, building on this analysis, we design a three-stage homogeneity augmentation framework and propose a Semi-Global Graph Transformer. Considering both global and local perspectives, the proposed model combines several techniques, including self-distillation, pseudo-label filtering, pre-training and fine-tuning, and metric learning. It thereby enhances both the graph structure and the optimization procedure, improving effectiveness, scalability, and generalizability. Finally, extensive experiments on seven public homophilous and heterophilous graph benchmarks show that the proposed method achieves results competitive with, or substantially better than, many baseline models, including the state of the art.
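The abstract gives no implementation details, but two of the ingredients it names are concrete enough to sketch. The snippet below is a minimal, hypothetical illustration (not the authors' code) of (1) global self-attention applied over the complete node set, the defining operation of a Global Graph Transformer, and (2) confidence-based pseudo-label filtering of the kind used to drive self-distillation. All class names, thresholds, and dimensions are assumptions for illustration only.

```python
# Hypothetical sketch of two mechanisms mentioned in the abstract:
# global (complete-graph) attention and pseudo-label filtering.
# Names, thresholds, and dimensions are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalNodeAttention(nn.Module):
    """One multi-head self-attention layer over ALL nodes at once,
    i.e. attention on the complete graph rather than the input edges."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim); treat the whole node set as one sequence.
        h = x.unsqueeze(0)                    # (1, num_nodes, dim)
        out, _ = self.attn(h, h, h)           # every node attends to every node
        return self.norm(x + out.squeeze(0))  # residual + layer norm


def filter_pseudo_labels(logits: torch.Tensor, threshold: float = 0.9):
    """Keep only nodes whose teacher prediction is confident enough.
    Returns (node_indices, pseudo_labels) for the retained nodes."""
    probs = F.softmax(logits, dim=-1)
    conf, labels = probs.max(dim=-1)
    keep = conf >= threshold                  # confidence filter
    return keep.nonzero(as_tuple=True)[0], labels[keep]


# Toy usage: teacher logits for 6 unlabeled nodes over 3 classes.
teacher_logits = torch.randn(6, 3) * 3
idx, pseudo = filter_pseudo_labels(teacher_logits, threshold=0.8)
print(idx, pseudo)        # nodes confident enough to supervise a student

layer = GlobalNodeAttention(dim=16)
x = torch.randn(6, 16)    # 6 nodes, 16-dim features
print(layer(x).shape)     # torch.Size([6, 16])
```

In a self-distillation loop of the kind the abstract describes, the retained (index, pseudo-label) pairs would be added to the labeled set used to train the next-stage model; how the paper actually combines this with its semi-global structure augmentation is not specified here.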