Enhancing robust node classification via information competition: An improved adversarial resilience method for graph attacks

Impact Factor 3.4 · CAS Tier 2, Computer Science · JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yong Huang, Yao Yang, Qiao Han, Xinling Guo, Yiteng Zhai, Baoping Cheng
Journal: Applied Intelligence, Vol. 55, No. 7
DOI: 10.1007/s10489-025-06478-2
Published: 2025-04-24 (Journal Article)
URL: https://link.springer.com/article/10.1007/s10489-025-06478-2
Citations: 0

Abstract

Graph neural networks (GNNs) have proven effective for node classification and a range of other graph-based tasks. However, recent studies have revealed that GNNs can be vulnerable to various adversarial attacks. Although many defense strategies, ranging from attack-agnostic to attack-oriented defenses, have been proposed to mitigate the impact of adversarial attacks on graph data, learning attack-agnostic graph representations effectively remains an open challenge. This paper introduces a novel information-competition-based framework for graph neural networks (iC-GNN; e.g., iC-GCN, iC-GAT) to enhance the robustness of GNNs against various adversarial attacks in node classification. Using graph reconstruction and low-rank approximation, our approach learns diversified graph representations that collaboratively perform node classification. Meanwhile, mutual information constraints are imposed on the different graph representations to ensure diversity and competition among graph features. The experimental results indicate that, within the proposed framework, iC-GCN outperforms other graph defense frameworks in countering a wide range of targeted and non-targeted adversarial attacks in both evasion and poisoning scenarios. Additionally, the concept extends to other widely used GNN models, such as iC-GAT and iC-SAGE; all iC-GNN models exhibit effective defense capabilities with comparable resilience to adversarial attacks. This underscores the strength and scalability of the iC-GNN framework, opening opportunities for a variety of graph learning applications.
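The low-rank approximation mentioned in the abstract can be illustrated with a toy example. This is a hedged sketch, not the authors' implementation: it applies a plain truncated SVD (the Eckart-Young best low-rank approximation), a standard building block in low-rank graph defenses, to an adjacency matrix perturbed by one adversarial edge. The graph, the chosen rank, and the function name `low_rank_adjacency` are all illustrative assumptions.

```python
# Hedged sketch: low-rank denoising of a graph adjacency matrix via truncated SVD.
import numpy as np

def low_rank_adjacency(adj: np.ndarray, rank: int) -> np.ndarray:
    """Return the best rank-`rank` approximation of `adj` (Eckart-Young)."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Toy graph: a dense triangle {0, 1, 2} plus an isolated node 3.
clean = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0]], dtype=float)

# The attacker injects a single spurious edge (0, 3).
poisoned = clean.copy()
poisoned[0, 3] = poisoned[3, 0] = 1.0

# Keeping only the top-2 singular components emphasizes the dominant
# (triangle) structure; the injected edge (0, 3) is reconstructed with
# less weight than the triangle edge (0, 1) that shares node 0 with it.
denoised = low_rank_adjacency(poisoned, rank=2)
print(denoised[0, 3] < denoised[0, 1])  # → True
```

In the paper's framework this kind of low-rank view is only one of several competing representations; the sketch above shows why it is useful on its own: high-frequency perturbations such as isolated adversarial edges carry little energy in the leading singular components, so truncation suppresses them relative to densely supported structure.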

Source journal: Applied Intelligence (Engineering/Technology — Computer Science: Artificial Intelligence)
CiteScore: 6.60
Self-citation rate: 20.80%
Annual article count: 1361
Review time: 5.9 months

About the journal: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.