Improving the representational power of graph neural networks via mixed substructure learning

Zhenpeng Wu, Jiamin Chen, Jianliang Gao

Information Fusion, Volume 126, Article 103558. DOI: 10.1016/j.inffus.2025.103558. Published online 2025-07-26. Available at https://www.sciencedirect.com/science/article/pii/S156625352500630X

Abstract: The recent trend in graph representation learning is to use Graph Neural Networks (GNNs) that approximate specific functions to capture particular graph substructures during aggregation, achieving stronger representational power than the 1-dimensional Weisfeiler-Leman (1-WL) graph isomorphism test. However, different graph substructures contribute differently across scenarios; for example, the clique substructure is especially informative for social networks. Moreover, these methods incur high computational costs when capturing graph substructures, making it impractical to count all graph substructures directly during aggregation. Adapting the optimal graph substructure to different scenarios is therefore a clear challenge. To address this challenge, we propose a simple yet effective solution, MixSL, which is flexible enough to work with any GNN backbone. Based on theoretical analysis, we offer a straightforward strategy that restricts the information of all graph substructures to the input feature space in advance, rather than to the aggregation process, thereby significantly reducing computational costs. We then apply mixed substructure learning to all graph substructures, so that the GNN backbone can automatically learn the sample distribution of graph substructures. Without changing the GNN backbone architecture or training settings, MixSL brings consistent and significant performance improvements on multiple graph classification benchmarks from different scenarios.
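The abstract's core cost-saving idea, moving substructure information into the input feature space rather than counting substructures during aggregation, can be sketched as a preprocessing step. The following is a minimal illustration (not the paper's actual implementation) that precomputes per-node triangle counts once and appends them to the initial node features; the function names and the dict-based graph representation are hypothetical choices for the sketch:

```python
# Sketch: precompute substructure (here, triangle) counts into node features,
# so the GNN receives substructure information as input rather than having
# to count substructures during every aggregation step.
# All names here are illustrative, not from the MixSL paper.

def triangle_counts(adj):
    """Per-node triangle counts for an undirected graph given as
    an adjacency dict {node: set(neighbors)}."""
    counts = {}
    for v, nbrs in adj.items():
        nbr_list = list(nbrs)
        c = 0
        for i in range(len(nbr_list)):
            for j in range(i + 1, len(nbr_list)):
                # v, nbr_list[i], nbr_list[j] form a triangle iff the
                # two neighbors are themselves adjacent.
                if nbr_list[j] in adj[nbr_list[i]]:
                    c += 1
        counts[v] = c
    return counts

def augment_features(features, adj):
    """Append each node's triangle count to its input feature vector."""
    tri = triangle_counts(adj)
    return {v: feats + [float(tri[v])] for v, feats in features.items()}

# 4-node graph: triangle {0, 1, 2} plus pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
features = {v: [1.0] for v in adj}  # trivial initial features
aug = augment_features(features, adj)
```

Because the counting runs once per graph before training, its cost is amortized across all epochs, which is the computational advantage the abstract alludes to.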
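The "mixed substructure learning" step, by which the backbone weighs different substructures per scenario, is not specified in the abstract; one plausible mechanism is a learnable softmax-weighted mixture over per-substructure feature channels. The sketch below is purely illustrative under that assumption, with hypothetical names and fixed logits standing in for learned parameters:

```python
# Sketch: mix several per-substructure feature channels with softmax weights,
# so that a model could in principle learn which substructure matters for a
# given scenario. Assumed mechanism, not the paper's actual method.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mix_substructure_channels(channels, logits):
    """Weighted mixture of equal-length feature vectors, one per
    substructure type (e.g., triangles vs. cliques). In a trained
    model the logits would be learnable parameters."""
    weights = softmax(logits)
    dim = len(channels[0])
    return [sum(w * ch[i] for w, ch in zip(weights, channels))
            for i in range(dim)]

# Two hypothetical substructure channels for one node.
triangle_feats = [1.0, 0.0]
clique_feats = [0.0, 2.0]
# Equal logits yield a plain average of the two channels.
mixed = mix_substructure_channels([triangle_feats, clique_feats], [0.0, 0.0])
```

With unequal logits the mixture would tilt toward one substructure, which mirrors the abstract's point that, say, cliques should dominate for social networks.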
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.