Towards Query-limited Adversarial Attacks on Graph Neural Networks
Haoran Li, Jinhong Zhang, Song Gao, Liwen Wu, Wei Zhou, Ruxin Wang
2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI), October 2022
DOI: 10.1109/ICTAI56018.2022.00082
Citations: 1
Abstract
Graph Neural Networks (GNNs) are a representation learning approach for graph-structured data that has made remarkable progress in recent years. Correspondingly, the robustness of such models has also received considerable attention. Previous studies show that the performance of a well-trained GNN can be degraded significantly by black-box adversarial examples. In practice, an attacker can issue only a very limited number of queries to the target model, yet existing methods require hundreds of thousands of queries to mount an attack, so the attacker is easily exposed. As a step toward addressing this issue, we propose a novel attack method, the Graph Query-limited Attack (GQA), which generates adversarial examples on a surrogate model in order to fool the target model. Specifically, GQA uses contrastive learning to fit the feature extraction layers of the surrogate model in a query-free manner, reducing the number of queries needed. Furthermore, to exploit query results fully, we obtain a series of information-rich queries by changing the input iteratively, storing the results in a buffer for reuse. Experiments show that GQA can decrease the accuracy of the target model by 4.8% while modifying only 1% of edges and performing only 100 queries.
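The query-reuse idea in the abstract — spending a real query on the target model only when a result is not already buffered — can be illustrated with a minimal sketch. All names below (`QueryBuffer`, `toy_target`, the budget of 100) are hypothetical illustrations, not the paper's actual implementation:

```python
# Sketch of a query buffer under a fixed query budget: results from the
# target model are cached so that iteratively perturbed inputs that repeat
# an earlier input reuse the stored response instead of spending a query.
class QueryBuffer:
    def __init__(self, budget):
        self.budget = budget   # maximum number of real queries allowed
        self.used = 0          # real queries spent so far
        self.cache = {}        # input key -> stored target-model output

    def query(self, target_fn, x):
        key = tuple(x)                 # hashable key for the input features
        if key in self.cache:          # recycle a buffered result: free
            return self.cache[key]
        if self.used >= self.budget:
            raise RuntimeError("query budget exhausted")
        out = target_fn(x)             # one real query against the target
        self.used += 1
        self.cache[key] = out
        return out

# Toy stand-in for the black-box target model: predicts class 1
# when the feature sum is positive, else class 0.
def toy_target(x):
    return 1 if sum(x) > 0 else 0

buf = QueryBuffer(budget=100)
a = buf.query(toy_target, [0.5, -0.2])   # real query (budget spent: 1)
b = buf.query(toy_target, [0.5, -0.2])   # served from the buffer (still 1)
print(buf.used)
```

Under this sketch the repeated input consumes no additional budget, which is the point of buffering when the attacker is limited to, e.g., 100 queries in total.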