Label-Flipping Attacks in GNN-Based Federated Learning

Impact Factor: 6.7 | CAS Tier 2 (Computer Science) | JCR Q1 (Engineering, Multidisciplinary)
Shanqing Yu;Jie Shen;Shaocong Xu;Jinhuan Wang;Zeyu Wang;Qi Xuan
{"title":"基于gnn的联邦学习中的标签翻转攻击","authors":"Shanqing Yu;Jie Shen;Shaocong Xu;Jinhuan Wang;Zeyu Wang;Qi Xuan","doi":"10.1109/TNSE.2025.3528831","DOIUrl":null,"url":null,"abstract":"Federated learning offers multi-party collaborative training but also poses several potential security risks. These security issues have been studied more extensively in the context of basic image models, but it is relatively less explored in the field of graphs. Compared to various existing graph-based attack methods, the label-flipping attack does not need to change the graph structure and it is highly stealthy. Therefore, this paper explores a Graph Federated Label Flipping Attack (Graph-FLFA) and proposes a new malicious gradient computation strategy for federated graph models. The goal of this attack method is to maximally disrupt the classification results of specific nodes in the node classification task, without affecting the classification performance of other nodes. This strategy exhibits strong specificity and stealthiness, effectively balancing the influence of various labels and ensuring significant attack effects even when the poisoning ratio is very low. Extensive experiments on four benchmark datasets demonstrate that Graph-FLFA has a high attack success rate in different GNN-based models, achieving the most advanced attack performance. Furthermore, it has the capability to evade detection methods employed in defensive measures.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"1357-1368"},"PeriodicalIF":6.7000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Label-Flipping Attacks in GNN-Based Federated Learning\",\"authors\":\"Shanqing Yu;Jie Shen;Shaocong Xu;Jinhuan Wang;Zeyu Wang;Qi Xuan\",\"doi\":\"10.1109/TNSE.2025.3528831\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning offers multi-party collaborative training but also poses several potential security risks. These security issues have been studied more extensively in the context of basic image models, but it is relatively less explored in the field of graphs. Compared to various existing graph-based attack methods, the label-flipping attack does not need to change the graph structure and it is highly stealthy. Therefore, this paper explores a Graph Federated Label Flipping Attack (Graph-FLFA) and proposes a new malicious gradient computation strategy for federated graph models. The goal of this attack method is to maximally disrupt the classification results of specific nodes in the node classification task, without affecting the classification performance of other nodes. This strategy exhibits strong specificity and stealthiness, effectively balancing the influence of various labels and ensuring significant attack effects even when the poisoning ratio is very low. Extensive experiments on four benchmark datasets demonstrate that Graph-FLFA has a high attack success rate in different GNN-based models, achieving the most advanced attack performance. 
Furthermore, it has the capability to evade detection methods employed in defensive measures.\",\"PeriodicalId\":54229,\"journal\":{\"name\":\"IEEE Transactions on Network Science and Engineering\",\"volume\":\"12 2\",\"pages\":\"1357-1368\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2025-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Network Science and Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10839587/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10839587/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning offers multi-party collaborative training but also poses several potential security risks. These security issues have been studied extensively for basic image models, but they remain relatively unexplored in the graph domain. Compared to existing graph-based attack methods, the label-flipping attack does not need to change the graph structure and is highly stealthy. This paper therefore explores a Graph Federated Label-Flipping Attack (Graph-FLFA) and proposes a new malicious gradient computation strategy for federated graph models. The goal of the attack is to maximally disrupt the classification results of specific nodes in a node classification task without affecting the classification performance of other nodes. The strategy exhibits strong specificity and stealthiness, effectively balancing the influence of the various labels and ensuring a significant attack effect even when the poisoning ratio is very low. Extensive experiments on four benchmark datasets demonstrate that Graph-FLFA achieves a high attack success rate across different GNN-based models, delivering state-of-the-art attack performance. Furthermore, it is able to evade the detection methods employed in defensive measures.
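The paper's implementation is not reproduced on this page, so the following is only a minimal, hypothetical sketch of the general idea the abstract describes: a malicious client in GNN-based federated learning flips the labels of a targeted subset of training nodes before computing its local update, so that the update it reports to the server degrades classification of those nodes while leaving the rest largely unaffected. It is not the Graph-FLFA gradient-computation strategy itself, and all identifiers (TinyGCN, local_update, target_nodes, flip_to) are assumed for illustration.

```python
# Hypothetical sketch of a label-flipping malicious client in federated GNN
# training. Not the paper's Graph-FLFA method; names and setup are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """Two-layer GCN over a dense, symmetrically normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)


def normalize_adj(adj):
    """Return D^{-1/2} (A + I) D^{-1/2} for a dense 0/1 adjacency matrix."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


def local_update(model, a_hat, x, y, train_mask,
                 target_nodes=None, flip_to=None, epochs=5, lr=0.01):
    """One client's local training round.

    A benign client leaves target_nodes as None. A malicious client flips the
    labels of its targeted training nodes to `flip_to` before training, so the
    model state it reports back encodes gradients computed on poisoned labels.
    """
    y_local = y.clone()
    if target_nodes is not None:
        y_local[target_nodes] = flip_to  # the label-flipping step
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(a_hat, x)
        loss = F.cross_entropy(logits[train_mask], y_local[train_mask])
        loss.backward()
        opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()}


if __name__ == "__main__":
    torch.manual_seed(0)
    n_nodes, n_feats, n_classes = 20, 8, 3
    adj = (torch.rand(n_nodes, n_nodes) < 0.2).float()
    adj = ((adj + adj.t()) > 0).float()  # make the toy graph undirected
    a_hat = normalize_adj(adj)
    x = torch.randn(n_nodes, n_feats)
    y = torch.randint(0, n_classes, (n_nodes,))
    train_mask = torch.zeros(n_nodes, dtype=torch.bool)
    train_mask[:15] = True

    model = TinyGCN(n_feats, 16, n_classes)
    # A malicious client targets nodes 0-2 and relabels them as class 2.
    poisoned_state = local_update(model, a_hat, x, y, train_mask,
                                  target_nodes=torch.tensor([0, 1, 2]), flip_to=2)
    print({k: tuple(v.shape) for k, v in poisoned_state.items()})
```

In a FedAvg-style setup, the server would average this poisoned state with the updates from benign clients, which is how a client with even a low poisoning ratio can bias the global model against the targeted nodes without an obvious drop in overall accuracy.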
Source journal
IEEE Transactions on Network Science and Engineering
Category: Engineering (Control and Systems Engineering)
CiteScore: 12.60
Self-citation rate: 9.10%
Articles published: 393
Journal overview: The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles that deal with the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between and co-evolution of different genres of networks.