Reinforcement learning-based secure training for adversarial defense in graph neural networks

Dongdong An, Yi Yang, Xin Gao, Hongda Qi, Yang Yang, Xin Ye, Maozhen Li, Qin Zhao

Neurocomputing, Volume 630, Article 129704. Published 2025-02-20. DOI: 10.1016/j.neucom.2025.129704
The security of Graph Neural Networks (GNNs) is crucial for ensuring the reliability and protection of the real-world systems into which they are integrated. However, current approaches cannot prevent GNNs from learning high-risk information, such as malicious edges, nodes, and convolution operations. In this paper, we propose a secure GNN learning framework called the Reinforcement Learning-based Secure Training Algorithm. We first introduce a model conversion technique that transforms the training process of a GNN into a verifiable Markov Decision Process (MDP) model. To maintain model security, we employ a Deep Q-Learning algorithm to block high-risk messages. Additionally, to verify that the policy derived from the Deep Q-Learning algorithm meets safety requirements, we design a model transformation algorithm that converts the MDP into a probabilistic verification model, so that the method's security can be checked with formal verification tools. The effectiveness and feasibility of the proposed method are demonstrated by a 6.4% improvement in average accuracy on open-source datasets under adversarial graph attacks.
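The abstract gives no implementation details, but its core idea can be sketched. Below is a minimal, hypothetical illustration (not the authors' code) of a Deep Q-Learning agent that decides, per edge, whether to admit a message during GNN aggregation; the edge-state construction, the two-action space {drop, keep}, and all names are illustrative assumptions.

```python
# A hedged sketch, assuming edge states are concatenated endpoint features
# and the agent chooses per edge between 0 = drop and 1 = keep.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps an edge-state vector to Q-values for the actions {drop, keep}."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_actions(qnet: QNet, states: torch.Tensor, eps: float) -> torch.Tensor:
    """Epsilon-greedy action per edge: 1 keeps the message, 0 drops it."""
    with torch.no_grad():
        greedy = qnet(states).argmax(dim=-1)
    explore = torch.randint(0, 2, greedy.shape)
    mask = torch.rand(greedy.shape) < eps
    return torch.where(mask, explore, greedy)

def masked_aggregate(x, edge_index, keep):
    """Mean-aggregate neighbor features over the edges the agent kept."""
    src, dst = edge_index
    kept = keep.bool()
    out = torch.zeros_like(x)
    deg = torch.zeros(x.size(0), 1)
    out.index_add_(0, dst[kept], x[src[kept]])
    deg.index_add_(0, dst[kept], torch.ones(int(kept.sum()), 1))
    return out / deg.clamp(min=1)

# One illustrative step on a toy 5-node path graph; a real reward signal
# (not shown) would penalize admitting high-risk edges.
num_nodes, feat = 5, 8
x = torch.randn(num_nodes, feat)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
states = torch.cat([x[edge_index[0]], x[edge_index[1]]], dim=-1)
qnet = QNet(state_dim=2 * feat)
keep = select_actions(qnet, states, eps=0.1)
h = masked_aggregate(x, edge_index, keep)
```

The verification step can be sketched in the same spirit. The abstract does not name a specific tool; PRISM is one standard probabilistic model checker, and the two-outcome model and probabilities below are made-up examples, not values from the paper.

```python
def to_prism(p_block: float) -> str:
    """Render a toy DTMC in PRISM's input language: a high-risk message is
    blocked with probability p_block, otherwise training is compromised."""
    p_fail = round(1.0 - p_block, 6)  # keep probabilities summing to 1
    return f"""dtmc

module secure_training
  // 0 = training, 1 = safe update applied, 2 = compromised
  s : [0..2] init 0;
  [] s=0 -> {p_block}:(s'=1) + {p_fail}:(s'=2);
  [] s=1 -> 1:(s'=1);
  [] s=2 -> 1:(s'=2);
endmodule
"""

# The emitted model could then be checked in PRISM against a PCTL safety
# property such as  P<0.05 [ F s=2 ]  (exact invocation depends on version).
print(to_prism(p_block=0.97))
```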
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.