Multi-Scale Sub-graph View Generation and Siamese Contrastive Learning for Graph Representations

Rende Hong, Kaibiao Lin, Binsheng Hong, Zhaori Guo, Fan Yang

Applied Soft Computing, Volume 182, Article 113608 (published 2025-07-23). DOI: 10.1016/j.asoc.2025.113608
https://www.sciencedirect.com/science/article/pii/S1568494625009196
Abstract
Graph Contrastive Learning (GCL) is an essential technique for extracting structural and node-level information in graph representation learning. Most existing GCL methods rely on data augmentation to generate multiple views of a graph and aim to maintain consistency across them via contrastive learning. However, these approaches usually have two limitations: (1) views generated by random perturbation strategies often disrupt critical information in the graph, and (2) it is difficult for the contrastive strategy to comprehensively construct contrastive sample pairs across views. To address these challenges, we propose an innovative GCL method called Multi-Scale Sub-graph View Generation and Siamese Contrastive Learning for Graph Representations (M3SGCL), which consists of three modules. First, the view generation module produces two novel augmented views by introducing multiple structure views and sampled sub-graph sets, which preserves the original graph structure while providing a deeper view of global graph information. Second, the Siamese network module processes the multiple sub-graph views with an online encoder and a target encoder, generating multi-scale representations that enrich the selection of high-quality positive and negative sample pairs for contrastive learning. Third, to further reduce the risk of information loss and incomplete sample construction, the contrastive learning module establishes multiple contrastive paths through the Siamese network and employs a multi-scale loss function to learn robust and informative representations. We perform comprehensive experiments on five real-world datasets, and the results show that M3SGCL significantly outperforms ten state-of-the-art baselines, including an improvement of 19.76% over the second-best method on the Wisconsin dataset. These results demonstrate that our method effectively captures more nuanced and informative graph information by constructing sub-graph views and introducing an enhanced multi-scale comparison strategy.
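To make the overall pipeline concrete, below is a minimal, self-contained PyTorch sketch of the kind of Siamese sub-graph contrastive setup the abstract describes: sub-graph views of one graph are encoded by an online encoder and an EMA-updated target encoder, and aligned nodes are contrasted with an InfoNCE-style loss. The sub-graph sampler, GCN-style encoder, momentum value, and loss form are illustrative assumptions, not the authors' exact M3SGCL formulation or multi-scale loss.

```python
# Hedged sketch: Siamese sub-graph contrastive learning in plain PyTorch.
# All architectural choices here are assumptions for illustration only.
import torch
import torch.nn.functional as F


class GCNEncoder(torch.nn.Module):
    """Two-layer GCN-style encoder: H = A_hat @ relu(A_hat @ X @ W1) @ W2."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)


def normalize_adj(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


def sample_subgraph(adj, x, keep_ratio=0.8):
    """Placeholder sub-graph view: keep a random node subset and its induced edges."""
    idx = torch.randperm(adj.size(0))[: int(keep_ratio * adj.size(0))]
    return normalize_adj(adj[idx][:, idx]), x[idx], idx


def info_nce(z1, z2, tau=0.5):
    """Contrast aligned node pairs between two views (positives on the diagonal)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))


# Toy graph: 50 nodes, random features and a random symmetric adjacency.
n, d = 50, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()

online = GCNEncoder(d, 32, 32)
target = GCNEncoder(d, 32, 32)
target.load_state_dict(online.state_dict())   # target starts as a copy of the online encoder
for p in target.parameters():
    p.requires_grad_(False)                    # target is updated only by EMA

opt = torch.optim.Adam(online.parameters(), lr=1e-3)
for step in range(5):
    # Two sub-graph views of the same graph.
    a1, x1, idx1 = sample_subgraph(adj, x)
    a2, x2, idx2 = sample_subgraph(adj, x)
    # Contrast nodes present in both views: online encoder on view 1, target encoder on view 2.
    common = [i for i in idx1.tolist() if i in set(idx2.tolist())]
    pos1 = torch.tensor([idx1.tolist().index(i) for i in common])
    pos2 = torch.tensor([idx2.tolist().index(i) for i in common])
    z1 = online(x1, a1)[pos1]
    with torch.no_grad():
        z2 = target(x2, a2)[pos2]
    loss = info_nce(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Exponential moving average update of the target encoder.
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(0.99).add_(0.01 * p_o)
    print(f"step {step}: loss {loss.item():.4f}")
```

In this sketch the positive pairs are the same node seen through two sub-graph views and two encoders; the paper's multi-scale strategy would additionally contrast representations at other granularities (e.g. sub-graph- or graph-level), which is omitted here for brevity.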
About the journal
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.

Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.