A dynamic ensemble learning model for robust Graph Neural Networks

IF 6.3 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Chenyu Zhou, Yabin Peng, Wei Huang, Xinyuan Miao, Yi Cao, Xinghao Wang, Xianglong Kong
{"title":"鲁棒图神经网络的动态集成学习模型","authors":"Chenyu Zhou ,&nbsp;Yabin Peng ,&nbsp;Wei Huang ,&nbsp;Xinyuan Miao ,&nbsp;Yi Cao ,&nbsp;Xinghao Wang ,&nbsp;Xianglong Kong","doi":"10.1016/j.neunet.2025.107810","DOIUrl":null,"url":null,"abstract":"<div><div>Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. While various defense models have been proposed, they often fail to account for the variability in both data and attacks, limiting their effectiveness in dynamic environments. Therefore, we propose DERG, a dynamic ensemble learning model for robust GNNs, which leverages multiple graph data and dynamically changing submodels for defense. Specifically, we first propose the graph sampling strategy to purify the perturbed graph, and generate multiple subgraphs to simulate the various potential variations that may occur in the graph. Then, we propose the mutual information-based diversity enhancement strategy to increase the variability among submodels, ensuring that each submodel focuses on a distinct defense direction and avoids being deceived by the same attack. Finally, we propose the game theory-based decision strategy to dynamically assign weights to submodels, with the goal of selecting the optimal submodels for different scenarios and adapting to the changing environment. Experiments on widely used datasets demonstrate that DERG exhibits significant robustness against a wide range of attacks, including graph modification attacks, backdoor poisoning attacks, and double attacks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"191 ","pages":"Article 107810"},"PeriodicalIF":6.3000,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A dynamic ensemble learning model for robust Graph Neural Networks\",\"authors\":\"Chenyu Zhou ,&nbsp;Yabin Peng ,&nbsp;Wei Huang ,&nbsp;Xinyuan Miao ,&nbsp;Yi Cao ,&nbsp;Xinghao Wang ,&nbsp;Xianglong Kong\",\"doi\":\"10.1016/j.neunet.2025.107810\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. While various defense models have been proposed, they often fail to account for the variability in both data and attacks, limiting their effectiveness in dynamic environments. Therefore, we propose DERG, a dynamic ensemble learning model for robust GNNs, which leverages multiple graph data and dynamically changing submodels for defense. Specifically, we first propose the graph sampling strategy to purify the perturbed graph, and generate multiple subgraphs to simulate the various potential variations that may occur in the graph. Then, we propose the mutual information-based diversity enhancement strategy to increase the variability among submodels, ensuring that each submodel focuses on a distinct defense direction and avoids being deceived by the same attack. Finally, we propose the game theory-based decision strategy to dynamically assign weights to submodels, with the goal of selecting the optimal submodels for different scenarios and adapting to the changing environment. 
Experiments on widely used datasets demonstrate that DERG exhibits significant robustness against a wide range of attacks, including graph modification attacks, backdoor poisoning attacks, and double attacks.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"191 \",\"pages\":\"Article 107810\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2025-07-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025006902\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025006902","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. While various defense models have been proposed, they often fail to account for the variability in both data and attacks, limiting their effectiveness in dynamic environments. Therefore, we propose DERG, a dynamic ensemble learning model for robust GNNs, which leverages multiple graph data and dynamically changing submodels for defense. Specifically, we first propose the graph sampling strategy to purify the perturbed graph, and generate multiple subgraphs to simulate the various potential variations that may occur in the graph. Then, we propose the mutual information-based diversity enhancement strategy to increase the variability among submodels, ensuring that each submodel focuses on a distinct defense direction and avoids being deceived by the same attack. Finally, we propose the game theory-based decision strategy to dynamically assign weights to submodels, with the goal of selecting the optimal submodels for different scenarios and adapting to the changing environment. Experiments on widely used datasets demonstrate that DERG exhibits significant robustness against a wide range of attacks, including graph modification attacks, backdoor poisoning attacks, and double attacks.
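The abstract describes three components of DERG: a graph sampling strategy that purifies the perturbed graph into multiple subgraph views, a mutual information-based diversity term that keeps submodels from collapsing onto the same decision behaviour, and a game theory-based strategy that dynamically weights submodels at decision time. The paper's concrete formulations are not reproduced on this page, so the minimal PyTorch sketch below uses simple stand-ins for each step: random edge dropping for the sampling strategy, a pairwise agreement penalty as a rough proxy for the mutual-information criterion, and a softmax over held-out scores in place of the game-theoretic weighting. All function names and parameters here are hypothetical.

```python
import torch
import torch.nn.functional as F


def sample_subgraphs(edge_index: torch.Tensor, num_views: int = 5,
                     drop_rate: float = 0.2) -> list:
    """Generate several edge-dropped views of a (possibly perturbed) graph.

    Crude stand-in for the graph sampling strategy: each view keeps a random
    subset of edges, simulating variations of the input graph.
    edge_index is assumed to have shape [2, num_edges].
    """
    views = []
    num_edges = edge_index.size(1)
    for _ in range(num_views):
        keep = torch.rand(num_edges, device=edge_index.device) > drop_rate  # Bernoulli edge mask
        views.append(edge_index[:, keep])
    return views


def agreement_penalty(prob_list) -> torch.Tensor:
    """Mean pairwise similarity of submodel class distributions.

    Stand-in for the mutual information-based diversity term: adding this
    penalty to the training loss pushes submodels toward distinct behaviour.
    Each entry of prob_list has shape [num_nodes, num_classes].
    """
    penalty, pairs = torch.zeros(()), 0
    for i in range(len(prob_list)):
        for j in range(i + 1, len(prob_list)):
            penalty = penalty + F.cosine_similarity(
                prob_list[i], prob_list[j], dim=-1).mean()
            pairs += 1
    return penalty / max(pairs, 1)


def submodel_weights(val_scores: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Softmax over per-submodel validation scores.

    Stand-in for the game theory-based decision strategy: submodels that
    perform better in the current scenario receive larger ensemble weights.
    """
    return F.softmax(val_scores / temperature, dim=0)


def ensemble_predict(prob_list, weights: torch.Tensor) -> torch.Tensor:
    """Weighted combination of submodel class distributions."""
    stacked = torch.stack(prob_list, dim=0)  # [num_submodels, num_nodes, num_classes]
    return (weights.view(-1, 1, 1) * stacked).sum(dim=0)
```

In a training loop under these assumptions, one submodel would be fitted per subgraph view with the agreement penalty added to its classification loss, and ensemble_predict would combine the submodels' class distributions using the current weights.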
Source journal
Neural Networks
Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.