{"title":"Graph convolutional network with adaptive grouping aggregation strategy","authors":"Ruixiang Wang , Chunxia Zhang , Chunhong Pan","doi":"10.1016/j.neunet.2025.108086","DOIUrl":null,"url":null,"abstract":"<div><div>The performance of graph convolutional networks (GCNs) with naive aggregation functions on nodes has reached the bottleneck, rendering a gap between practice and theoretical expressity. Some learning-based aggregation strategies have been proposed to improve the performance. However, few of them focus on how these strategies affect the expressity and evaluate their performance in an equal experimental setting. In this paper, we point out that the generated features lack discrimination because naive aggregation functions cannot retain sufficient node information, largely leading to the performance gap. Accordingly, a novel Adaptive Grouping Aggregation (AGA) strategy is proposed to remedy this drawback. Inspired by the label histogram in the Weisfeiler-Lehman (WL) Test, this strategy assigns each node to a unique group to retain more node information, which is proven to have a strictly more powerful expressity. In this work setting, the nodes are grouped according to a modified Student’s t-Distribution between node features and a set of learnable group labels, where the Gumbel Softmax is employed to implement this strategy in an end-to-end trainable pipeline. As a result, such a design can generate more discriminative features and offer a plug-in module in most architectures. Extensive experiments have been conducted on several benchmarks to compare our method with other aggregation strategies. The proposed method improves the performance in all control groups of all benchmarks and achieves the best result in most cases. Additional ablation studies and comparisons with state-of-the-art methods on the large-scale benchmark also indicate the superiority of our method.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108086"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025009669","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
The performance of graph convolutional networks (GCNs) with naive aggregation functions over nodes has reached a bottleneck, leaving a gap between practical performance and theoretical expressivity. Several learning-based aggregation strategies have been proposed to improve performance, but few of them examine how these strategies affect expressivity or evaluate them under an equal experimental setting. In this paper, we point out that the generated features lack discrimination because naive aggregation functions cannot retain sufficient node information, which largely accounts for the performance gap. Accordingly, a novel Adaptive Grouping Aggregation (AGA) strategy is proposed to remedy this drawback. Inspired by the label histogram in the Weisfeiler-Lehman (WL) test, this strategy assigns each node to a unique group to retain more node information, and is proven to be strictly more expressive. Specifically, nodes are grouped according to a modified Student's t-distribution between node features and a set of learnable group labels, and the Gumbel-Softmax is employed to implement this strategy in an end-to-end trainable pipeline. As a result, the design generates more discriminative features and serves as a plug-in module for most architectures. Extensive experiments on several benchmarks compare our method with other aggregation strategies: the proposed method improves performance in all control groups of all benchmarks and achieves the best result in most cases. Additional ablation studies and comparisons with state-of-the-art methods on a large-scale benchmark further indicate the superiority of our method.
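The abstract describes the grouping mechanism only at a high level, so the following is a minimal PyTorch sketch of how such a layer could look: node features are softly matched to a set of learnable group labels with a Student's t-distribution kernel (the common soft-assignment form; the paper's "modified" kernel may differ), the assignment is discretized with the Gumbel-Softmax so the pipeline stays end-to-end trainable, and neighbour features are then summed group by group. The class name `AdaptiveGroupingAggregation` and the hyperparameters `num_groups`, `alpha`, and `tau` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGroupingAggregation(nn.Module):
    """Sketch of a grouping aggregation layer (hypothetical, based on the abstract):
    nodes are softly assigned to learnable group labels via a Student's
    t-distribution kernel, the assignment is discretized with Gumbel-Softmax,
    and neighbour features are aggregated separately per group."""

    def __init__(self, feat_dim: int, num_groups: int, alpha: float = 1.0, tau: float = 1.0):
        super().__init__()
        # Learnable group labels: one embedding per group.
        self.group_labels = nn.Parameter(torch.randn(num_groups, feat_dim))
        self.alpha = alpha  # degrees of freedom of the t-distribution kernel (assumed form)
        self.tau = tau      # Gumbel-Softmax temperature

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, D) node features
        # adj: (N, N) dense adjacency matrix (self-loops assumed already added)

        # Student's t-distribution similarity between nodes and group labels.
        dist_sq = torch.cdist(x, self.group_labels).pow(2)                    # (N, K)
        q = (1.0 + dist_sq / self.alpha).pow(-(self.alpha + 1.0) / 2.0)       # (N, K)
        logits = torch.log(q / q.sum(dim=1, keepdim=True) + 1e-12)

        # Differentiable, (nearly) one-hot group assignment per node.
        assign = F.gumbel_softmax(logits, tau=self.tau, hard=True)            # (N, K)

        # Aggregate neighbours group by group, then concatenate the K partial
        # sums so information from different groups is kept separate:
        # grouped[i, k, :] = sum_j adj[i, j] * assign[j, k] * x[j, :]
        grouped = torch.einsum('ij,jk,jd->ikd', adj, assign, x)               # (N, K, D)
        return grouped.flatten(start_dim=1)                                   # (N, K*D)
```

In a full GCN layer, the concatenated per-group sums would typically be passed through a learned linear transform and nonlinearity; how the paper combines the group outputs is not specified in the abstract.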
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.