TGSL: Trade-off graph structure learning via multifaceted graph information bottleneck

IF 6.3 | CAS Region 1 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Shuangjie Li, Baoming Zhang, Jianqing Song, Gaoli Ruan, Chongjun Wang, Junyuan Xie
Journal: Neural Networks, Volume 194, Article 108125
DOI: 10.1016/j.neunet.2025.108125
Published: 2025-09-18 (Journal Article)
Citations: 0

Abstract

Graph neural networks (GNNs) are prominent for their effectiveness in processing graph-structured data for semi-supervised node classification tasks. Most existing GNNs perform message passing directly based on the observed graph structure. However, in real-world scenarios, the observed structure is often suboptimal due to multiple factors, significantly degrading the performance of GNNs. To address this challenge, we first conduct an empirical analysis showing that different graph structures significantly impact empirical risk and classification performance. Motivated by our observations, we propose a novel method named Trade-off Graph Structure Learning (TGSL), guided by the multifaceted Graph Information Bottleneck (GIB) principle based on Mutual Information (MI). The key idea behind TGSL is to learn a minimal sufficient graph structure that minimizes empirical risk while maintaining performance. Specifically, we introduce global feature augmentation to capture the structural roles of nodes, and global structure augmentation to uncover global relationships between nodes. The augmented graphs are then processed by structure estimators with different parameters for refinement and redefinition, respectively. Additionally, we innovatively leverage multifaceted GIB as the optimization objective by maximizing the MI between the labels and the representation derived from the final structure, while constraining the MI between this representation and that based on the redefined structures. This trade-off helps avoid capturing irrelevant information from the redefined structures and enhances the final representation for node classification. We conduct extensive experiments across a range of datasets under clean and attacked conditions. The results demonstrate the outstanding performance and robustness of TGSL over state-of-the-art baselines.
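The multifaceted GIB objective described in the abstract amounts to maximizing the mutual information I(Y; Z) between the labels and the representation Z from the final structure, while constraining the terms I(Z; Z_k) for representations derived from the redefined structures. Below is a minimal illustrative sketch of such a trade-off loss, not the authors' implementation: cross-entropy is a standard surrogate for the label-MI term, a cosine-similarity penalty stands in for the constrained MI terms, and the function name, proxies, and `beta` weight are all assumptions made for illustration.

```python
import numpy as np

def tradeoff_objective(logits, labels, z_final, z_redefined, beta=0.1):
    """Hypothetical sketch of a TGSL-style trade-off loss.

    Cross-entropy approximates maximizing I(Y; Z); the mean cosine
    similarity between the final representation and each representation
    from a redefined structure stands in for the I(Z; Z_k) constraint.
    """
    # Numerically stable log-softmax, then cross-entropy (surrogate for -I(Y; Z)).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()

    def mean_cosine(a, b):
        # Row-wise cosine similarity, averaged over nodes.
        num = (a * b).sum(axis=1)
        den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
        return (num / den).mean()

    # Penalty standing in for the constrained I(Z; Z_k) terms.
    penalty = np.mean([mean_cosine(z_final, z_k) for z_k in z_redefined])
    return ce + beta * penalty

# Toy usage: 2 nodes, 3 classes, 4-dimensional representations.
logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
z = np.ones((2, 4))
loss = tradeoff_objective(logits, labels, z, [np.ones((2, 4))], beta=0.1)
```

Raising `beta` pushes the final representation away from the redefined-structure representations, which mirrors the paper's goal of not carrying irrelevant information from those structures into the final representation.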
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Annual article output: 425
Review time: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.