Chenyu Zhou , Yabin Peng , Wei Huang , Xinyuan Miao , Yi Cao , Xinghao Wang , Xianglong Kong
Neural Networks, Volume 191, Article 107810. DOI: 10.1016/j.neunet.2025.107810. Published 2025-07-07.
A dynamic ensemble learning model for robust Graph Neural Networks
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. Although various defense models have been proposed, they often fail to account for variability in both the data and the attacks, which limits their effectiveness in dynamic environments. We therefore propose DERG, a dynamic ensemble learning model for robust GNNs that defends by leveraging multiple graphs and dynamically changing submodels. Specifically, we first propose a graph sampling strategy that purifies the perturbed graph and generates multiple subgraphs to simulate the potential variations the graph may undergo. We then propose a mutual-information-based diversity enhancement strategy that increases variability among submodels, ensuring that each submodel focuses on a distinct defense direction and preventing all submodels from being deceived by the same attack. Finally, we propose a game-theory-based decision strategy that dynamically assigns weights to submodels, selecting the optimal submodels for different scenarios and adapting to the changing environment. Experiments on widely used datasets demonstrate that DERG exhibits significant robustness against a wide range of attacks, including graph modification attacks, backdoor poisoning attacks, and double attacks.
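The abstract gives no implementation details, but its three ingredients — sampling subgraphs from a possibly perturbed graph, maintaining diverse submodels, and dynamically re-weighting them — can be loosely illustrated as follows. This is only a sketch under stated assumptions: the function names, the random edge-dropping sampler, and the agreement-based weight update are placeholders invented here, not DERG's actual strategies (which rely on purification, mutual information, and game theory as described above).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subgraph(adj, keep_prob=0.8):
    """Randomly drop edges to produce one subgraph view of the input graph.
    (Illustrative stand-in for DERG's graph sampling strategy.)"""
    mask = rng.random(adj.shape) < keep_prob
    mask = np.triu(mask, 1)
    mask = mask | mask.T          # keep the sampled graph undirected
    return adj * mask

def ensemble_predict(probs_per_submodel, weights):
    """Weighted soft-voting over per-submodel class-probability tensors
    of shape (n_submodels, n_nodes, n_classes)."""
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    return np.tensordot(w, probs_per_submodel, axes=1)

def update_weights(probs_per_submodel, weights, lr=1.0):
    """Increase the weight of submodels that agree with the current
    consensus — a crude, assumed stand-in for the paper's
    game-theory-based decision strategy."""
    consensus = ensemble_predict(probs_per_submodel, weights)
    agreement = np.array([(p * consensus).sum(axis=-1).mean()
                          for p in probs_per_submodel])
    new = np.asarray(weights, dtype=float) * np.exp(lr * agreement)
    return new / new.sum()
```

In this toy form, each submodel would be trained on its own sampled subgraph (diversity by data), and the weight update shifts influence toward submodels whose predictions remain mutually consistent, mimicking the idea of selecting the best submodels for the current scenario.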
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.