{"title":"Robust Graph Neural Networks","authors":"","doi":"10.1017/9781108924184.010","DOIUrl":null,"url":null,"abstract":"As the generalizations of traditional DNNs to graphs, GNNs inherit both advantages and disadvantages of traditional DNNs. Like traditional DNNs, GNNs have been shown to be effective in many graph-related tasks such as nodefocused and graph-focused tasks. Traditional DNNs have been demonstrated to be vulnerable to dedicated designed adversarial attacks (Goodfellow et al., 2014b; Xu et al., 2019b). Under adversarial attacks, the victimized samples are perturbed in such a way that they are not easily noticeable, but they can lead to wrong results. It is increasingly evident that GNNs also inherit this drawback. The adversary can generate graph adversarial perturbations by manipulating the graph structure or node features to fool the GNN models. This limitation of GNNs has arisen immense concerns on adopting them in safety-critical applications such as financial systems and risk management. For example, in a credit scoring system, fraudsters can fake connections with several high-credit customers to evade the fraudster detection models; and spammers can easily create fake followers to increase the chance of fake news being recommended and spread. Therefore, we have witnessed more and more research attention to graph adversarial attacks and their countermeasures. In this chapter, we first introduce concepts and definitions of graph adversarial attacks and detail some representative adversarial attack methods on graphs. Then, we discuss representative defense techniques against these adversarial attacks.","PeriodicalId":254746,"journal":{"name":"Deep Learning on Graphs","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Deep Learning on Graphs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/9781108924184.010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
As generalizations of traditional DNNs to graphs, GNNs inherit both the advantages and the disadvantages of traditional DNNs. Like traditional DNNs, GNNs have been shown to be effective in many graph-related tasks, both node-focused and graph-focused. Traditional DNNs have been demonstrated to be vulnerable to deliberately designed adversarial attacks (Goodfellow et al., 2014b; Xu et al., 2019b). Under adversarial attacks, victim samples are perturbed in ways that are not easily noticeable yet lead to wrong outputs. It is increasingly evident that GNNs inherit this drawback as well. An adversary can generate graph adversarial perturbations by manipulating the graph structure or node features to fool GNN models. This limitation of GNNs has raised immense concerns about adopting them in safety-critical applications such as financial systems and risk management. For example, in a credit scoring system, fraudsters can fake connections with several high-credit customers to evade fraud detection models, and spammers can easily create fake followers to increase the chance that fake news is recommended and spread. Therefore, graph adversarial attacks and their countermeasures have attracted increasing research attention. In this chapter, we first introduce concepts and definitions of graph adversarial attacks and detail some representative adversarial attack methods on graphs. Then, we discuss representative defense techniques against these adversarial attacks.
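To make the structure-perturbation idea concrete, here is a minimal sketch, not taken from the chapter: a toy one-layer GCN in plain numpy, and a greedy adversary that flips a single edge incident to a target node in an attempt to change that node's predicted class. The graph, the weights, and the `gcn_forward` helper are all illustrative assumptions.

```python
# Illustrative sketch of a graph structure-perturbation attack on a toy
# GCN-style model. All data, weights, and helper names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, symmetric adjacency matrix A and node features X.
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
X = rng.normal(size=(5, 4))   # 5 nodes, 4 features each
W = rng.normal(size=(4, 2))   # fixed "trained" weights, 2 output classes

def gcn_forward(A, X, W):
    """One GCN layer: symmetrically normalized propagation, then linear map."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)                 # degrees (>= 1 due to self-loops)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

target = 0                                # node the adversary attacks
clean_pred = gcn_forward(A, X, W)[target].argmax()

# Greedy structure attack: try flipping each possible edge incident to the
# target node and keep the first flip that changes the model's prediction.
for j in range(A.shape[0]):
    if j == target:
        continue
    A_pert = A.copy()
    A_pert[target, j] = A_pert[j, target] = 1 - A_pert[target, j]  # flip edge
    new_pred = gcn_forward(A_pert, X, W)[target].argmax()
    if new_pred != clean_pred:
        print(f"flipping edge ({target}, {j}) changes prediction "
              f"{clean_pred} -> {new_pred}")
        break
else:
    print("no single edge flip changed the prediction in this toy example")
```

This mirrors, in miniature, the unnoticeable-perturbation setting described above: a single edge flip is a small change to the graph, yet it can be enough to alter the model's output for the targeted node.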