{"title":"基于相似性的图神经网络对抗性攻击防御","authors":"Minghong Yao, Haizheng Yu, H. Bian","doi":"10.3233/aic-220120","DOIUrl":null,"url":null,"abstract":"Graph Neural Networks (GNNs) are powerful tools in graph application areas. However, recent studies indicate that GNNs are vulnerable to adversarial attacks, which can lead GNNs to easily make wrong predictions for downstream tasks. A number of works aim to solve this problem but what criteria we should follow to clean the perturbed graph is still a challenge. In this paper, we propose GSP-GNN, a general framework to defend against massive poisoning attacks that can perturb graphs. The vital principle of GSP-GNN is to explore the similarity property to mitigate negative effects on graphs. Specifically, this method prunes adversarial edges by the similarity of node feature and graph structure to eliminate adversarial perturbations. In order to stabilize and enhance GNNs training process, previous layer information is adopted in case a large number of edges are pruned in one layer. Extensive experiments on three real-world graphs demonstrate that GSP-GNN achieves significantly better performance compared with the representative baselines and has favorable generalization ability simultaneously.","PeriodicalId":50835,"journal":{"name":"AI Communications","volume":"25 1","pages":"27-39"},"PeriodicalIF":1.4000,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Defending against adversarial attacks on graph neural networks via similarity property\",\"authors\":\"Minghong Yao, Haizheng Yu, H. Bian\",\"doi\":\"10.3233/aic-220120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph Neural Networks (GNNs) are powerful tools in graph application areas. However, recent studies indicate that GNNs are vulnerable to adversarial attacks, which can lead GNNs to easily make wrong predictions for downstream tasks. A number of works aim to solve this problem but what criteria we should follow to clean the perturbed graph is still a challenge. In this paper, we propose GSP-GNN, a general framework to defend against massive poisoning attacks that can perturb graphs. The vital principle of GSP-GNN is to explore the similarity property to mitigate negative effects on graphs. Specifically, this method prunes adversarial edges by the similarity of node feature and graph structure to eliminate adversarial perturbations. In order to stabilize and enhance GNNs training process, previous layer information is adopted in case a large number of edges are pruned in one layer. 
Extensive experiments on three real-world graphs demonstrate that GSP-GNN achieves significantly better performance compared with the representative baselines and has favorable generalization ability simultaneously.\",\"PeriodicalId\":50835,\"journal\":{\"name\":\"AI Communications\",\"volume\":\"25 1\",\"pages\":\"27-39\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2023-01-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.3233/aic-220120\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI Communications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.3233/aic-220120","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Defending against adversarial attacks on graph neural networks via similarity property
Graph Neural Networks (GNNs) are powerful tools in graph application areas. However, recent studies indicate that GNNs are vulnerable to adversarial attacks, which can lead GNNs to easily make wrong predictions for downstream tasks. A number of works aim to solve this problem but what criteria we should follow to clean the perturbed graph is still a challenge. In this paper, we propose GSP-GNN, a general framework to defend against massive poisoning attacks that can perturb graphs. The vital principle of GSP-GNN is to explore the similarity property to mitigate negative effects on graphs. Specifically, this method prunes adversarial edges by the similarity of node feature and graph structure to eliminate adversarial perturbations. In order to stabilize and enhance GNNs training process, previous layer information is adopted in case a large number of edges are pruned in one layer. Extensive experiments on three real-world graphs demonstrate that GSP-GNN achieves significantly better performance compared with the representative baselines and has favorable generalization ability simultaneously.
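The abstract does not include implementation details, so the following is a minimal, hypothetical PyTorch sketch of what similarity-based edge pruning with a previous-layer fallback could look like. The function names, the choice of cosine similarity for features and Jaccard overlap for structure, and the thresholds (feat_thresh, struct_thresh, min_keep_ratio) are illustrative assumptions, not the paper's actual method.

import torch
import torch.nn.functional as F

def feature_similarity(x, edge_index):
    # Cosine similarity between the endpoint feature vectors of each edge.
    # x: [num_nodes, num_features]; edge_index: [2, num_edges] in COO form.
    src, dst = edge_index
    return F.cosine_similarity(x[src], x[dst], dim=-1)

def structural_similarity(edge_index, num_nodes):
    # Jaccard overlap of the endpoints' neighbour sets, one score per edge.
    neighbours = [set() for _ in range(num_nodes)]
    for u, v in edge_index.t().tolist():
        neighbours[u].add(v)
        neighbours[v].add(u)
    scores = []
    for u, v in edge_index.t().tolist():
        inter = len(neighbours[u] & neighbours[v])
        union = len(neighbours[u] | neighbours[v])
        scores.append(inter / union if union else 0.0)
    return torch.tensor(scores)

def prune_edges(x, edge_index, feat_thresh=0.1, struct_thresh=0.05):
    # Keep an edge only if both similarity scores clear their thresholds;
    # edges between dissimilar nodes are treated as likely adversarial.
    keep = (feature_similarity(x, edge_index) >= feat_thresh) & \
           (structural_similarity(edge_index, x.size(0)) >= struct_thresh)
    return edge_index[:, keep], keep

def stabilized_output(h_prev, h_curr, keep, min_keep_ratio=0.5):
    # If this layer pruned too aggressively, blend the previous layer's
    # representation back in rather than trusting the sparsified graph alone.
    if keep.float().mean() < min_keep_ratio:
        return 0.5 * (h_prev + h_curr)
    return h_curr

In practice the thresholds would need to be tuned per dataset, and the fallback rule here (a fixed 50/50 blend) is only one plausible way to reuse previous-layer information when too many edges are removed in a single layer.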
Journal description:
AI Communications is a journal on artificial intelligence (AI) with a close relationship to EurAI (the European Association for Artificial Intelligence, formerly ECCAI). It covers the whole AI community: scientific institutions as well as commercial and industrial companies.
AI Communications aims to enhance contact and information exchange between AI researchers and developers, and to provide supranational information to those concerned with AI and advanced information processing. It publishes refereed articles on scientific and technical AI topics, provided they are of sufficient interest to a broad readership from both scientific and practical backgrounds. In addition, it carries high-level background material, both at the technical level and at the level of opinions, policies, and news.