Curriculum Graph Poisoning

Hanwen Liu, P. Zhao, Tingyang Xu, Yatao Bian, Junzhou Huang, Yuesheng Zhu, Yadong Mu
{"title":"课程图表中毒","authors":"Hanwen Liu, P. Zhao, Tingyang Xu, Yatao Bian, Junzhou Huang, Yuesheng Zhu, Yadong Mu","doi":"10.1145/3543507.3583211","DOIUrl":null,"url":null,"abstract":"Despite the success of graph neural networks (GNNs) over the Web in recent years, the typical transductive learning setting for node classification requires GNNs to be retrained frequently, making them vulnerable to poisoning attacks by corrupting the training graph. Poisoning attacks on graphs are, however, non-trivial as the attack space is potentially large, and the discrete graph structure makes the poisoning function non-differentiable. In this paper, we revisit the bi-level optimization problem in graph poisoning and propose a novel graph poisoning method, termed Curriculum Graph Poisoning (CuGPo), inspired by curriculum learning. In contrast to other poisoning attacks that use heuristics or directly optimize the graph, our method learns to generate poisoned graphs from basic adversarial knowledge first and advanced knowledge later. Specifically, for the outer optimization, we utilize the slightly perturbed graphs which represent the easy poisoning task at the beginning, and then enlarge the attack space until the final; for the inner optimization, we firstly exploit the knowledge from the clean graph and then adapt quickly to perturbed graphs to obtain the adversarial knowledge. Extensive experiments demonstrate that CuGPo achieves state-of-the-art performance in graph poisoning attacks.","PeriodicalId":296351,"journal":{"name":"Proceedings of the ACM Web Conference 2023","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Curriculum Graph Poisoning\",\"authors\":\"Hanwen Liu, P. Zhao, Tingyang Xu, Yatao Bian, Junzhou Huang, Yuesheng Zhu, Yadong Mu\",\"doi\":\"10.1145/3543507.3583211\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite the success of graph neural networks (GNNs) over the Web in recent years, the typical transductive learning setting for node classification requires GNNs to be retrained frequently, making them vulnerable to poisoning attacks by corrupting the training graph. Poisoning attacks on graphs are, however, non-trivial as the attack space is potentially large, and the discrete graph structure makes the poisoning function non-differentiable. In this paper, we revisit the bi-level optimization problem in graph poisoning and propose a novel graph poisoning method, termed Curriculum Graph Poisoning (CuGPo), inspired by curriculum learning. In contrast to other poisoning attacks that use heuristics or directly optimize the graph, our method learns to generate poisoned graphs from basic adversarial knowledge first and advanced knowledge later. Specifically, for the outer optimization, we utilize the slightly perturbed graphs which represent the easy poisoning task at the beginning, and then enlarge the attack space until the final; for the inner optimization, we firstly exploit the knowledge from the clean graph and then adapt quickly to perturbed graphs to obtain the adversarial knowledge. 
Extensive experiments demonstrate that CuGPo achieves state-of-the-art performance in graph poisoning attacks.\",\"PeriodicalId\":296351,\"journal\":{\"name\":\"Proceedings of the ACM Web Conference 2023\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ACM Web Conference 2023\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3543507.3583211\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Web Conference 2023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3543507.3583211","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Despite the success of graph neural networks (GNNs) over the Web in recent years, the typical transductive learning setting for node classification requires GNNs to be retrained frequently, making them vulnerable to poisoning attacks that corrupt the training graph. Poisoning attacks on graphs are, however, non-trivial: the attack space is potentially large, and the discrete graph structure makes the poisoning function non-differentiable. In this paper, we revisit the bi-level optimization problem in graph poisoning and propose a novel graph poisoning method, termed Curriculum Graph Poisoning (CuGPo), inspired by curriculum learning. In contrast to other poisoning attacks that use heuristics or directly optimize the graph, our method learns to generate poisoned graphs from basic adversarial knowledge first and advanced knowledge later. Specifically, for the outer optimization, we start with slightly perturbed graphs, which represent an easy poisoning task, and then gradually enlarge the attack space until the final stage; for the inner optimization, we first exploit the knowledge from the clean graph and then adapt quickly to perturbed graphs to obtain the adversarial knowledge. Extensive experiments demonstrate that CuGPo achieves state-of-the-art performance in graph poisoning attacks.
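
The abstract frames CuGPo as a curriculum over a bi-level optimization: the outer loop gradually enlarges the attack space (the perturbation budget), while the inner loop warm-starts from knowledge learned on the clean graph and adapts to the current perturbed graph. The sketch below illustrates that loop on a toy dense graph. It is not the authors' implementation: the GCN-style surrogate, the linear budget schedule, and the gradient-based greedy edge-flip scoring are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of a curriculum-style bi-level poisoning loop
# in the spirit of the abstract. NOT the authors' CuGPo code: the surrogate,
# budget schedule, and edge-scoring rule are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy node-classification problem with a dense adjacency matrix (assumption).
n, d, c = 30, 8, 2                                   # nodes, features, classes
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))
A = (torch.rand(n, n) < 0.1).float()
A = torch.triu(A, 1); A = A + A.t()                  # symmetric, no self-loops
train_mask = torch.zeros(n, dtype=torch.bool)
train_mask[:15] = True

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN-style surrogate; softmax is applied inside the loss."""
    A_hat = A + torch.eye(n)
    deg = A_hat.sum(1)
    A_norm = A_hat / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)
    return A_norm @ torch.relu(A_norm @ X @ W1) @ W2

def train_surrogate(A, steps, init=None):
    """Inner optimization: (re)train the surrogate, optionally warm-started."""
    if init is None:
        W1 = torch.randn(d, 16, requires_grad=True)
        W2 = torch.randn(16, c, requires_grad=True)
    else:
        W1 = init[0].clone().requires_grad_(True)
        W2 = init[1].clone().requires_grad_(True)
    opt = torch.optim.Adam([W1, W2], lr=0.01)
    for _ in range(steps):
        loss = F.cross_entropy(gcn_forward(A, X, W1, W2)[train_mask], y[train_mask])
        opt.zero_grad(); loss.backward(); opt.step()
    return W1.detach(), W2.detach()

# "Basic adversarial knowledge": weights learned on the clean graph, reused as
# a warm start so later inner loops only have to adapt to the perturbations.
clean_W = train_surrogate(A, steps=200)

A_poison = A.clone()
total_budget, stages = 10, 4                         # total edge flips, curriculum stages
for stage in range(1, stages + 1):
    budget = total_budget * stage // stages          # outer curriculum: grow attack space
    W1, W2 = train_surrogate(A_poison, steps=20 * stage, init=clean_W)

    # Score symmetric edge flips by the gradient of the attacker objective w.r.t. A
    # (a first-order proxy; the attacker wants the training loss to increase).
    A_var = A_poison.clone().requires_grad_(True)
    atk_loss = F.cross_entropy(gcn_forward(A_var, X, W1, W2)[train_mask], y[train_mask])
    atk_loss.backward()
    score = A_var.grad * (1 - 2 * A_poison)          # estimated gain from flipping each entry
    score = torch.triu(score + score.t(), diagonal=1)

    k = budget - int((A_poison != A).sum().item() // 2)
    if k <= 0:
        continue
    idx = torch.topk(score.flatten(), k).indices     # greedy: take the k best flips
    for flat in idx.tolist():
        i, j = flat // n, flat % n
        A_poison[i, j] = 1 - A_poison[i, j]
        A_poison[j, i] = A_poison[i, j]

# A_poison now holds the perturbed training graph produced by the curriculum.
print("flips used:", int((A_poison != A).sum().item() // 2))
```

In this reading, the growing flip budget plays the role of the outer curriculum (easy poisoning tasks first, full attack space last), and warm-starting the surrogate from the clean-graph weights stands in for acquiring basic adversarial knowledge before adapting to the harder, more heavily perturbed graphs.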