Single-client GAN-based backdoor attacks for Asynchronous Federated Learning

IF 6.5 | CAS Region 2 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Siyu Guan, Chunguang Huang, Hai Cheng
Journal: Neurocomputing, Volume 657, Article 131580
DOI: 10.1016/j.neucom.2025.131580
Publication date: 2025-09-16
URL: https://www.sciencedirect.com/science/article/pii/S0925231225022520
Citations: 0

Abstract

Federated Learning (FL) enables distributed collaborative training while preserving data privacy; however, it demonstrates significant vulnerability to backdoor attacks. Existing attack methodologies predominantly require control of numerous malicious clients to achieve efficacy and largely neglect asynchronous FL scenarios. In response to these limitations, we propose a novel GAN-based backdoor attack framework capable of injecting effective and covert backdoors with minimal malicious client participation, functioning efficiently across both synchronous and asynchronous environments. Our framework operates effectively with a single malicious client, eliminating the need for coordination among multiple adversarial participants or prior knowledge of benign client data distributions. This reduction in resource requirements enhances the framework's practicality in real-world FL implementations. The malicious client employs a Generative Adversarial Network to synthesize adversarial samples containing predefined triggers, which are subsequently incorporated into local training datasets. The concurrent training on legitimate and triggered data enhances attack effectiveness, while gradient injection (manipulating differences between local and global gradients to introduce strategic noise) facilitates backdoor embedding with improved stealth characteristics. Empirical evaluations demonstrate that in a configuration of 200 clients with a single attacker, our framework achieves attack success rates of 98.66% on MNIST and 86.29% on CIFAR-10 datasets. Comprehensive experimentation across both datasets substantiates the framework's effectiveness, imperceptibility, and resilience in synchronous and asynchronous FL environments. This research contributes significant insights into backdoor attack strategies in FL, particularly within asynchronous contexts, and underscores the imperative for developing robust defensive countermeasures.
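The abstract describes two mechanisms: mixing trigger-stamped, relabelled samples into the malicious client's local training data, and perturbing the malicious update relative to the global model to stay close to benign statistics. The sketch below is an illustrative toy rendition of those two ideas, not the paper's method: the fixed square trigger stands in for the GAN-synthesized trigger, and `gradient_injection` is a simplistic guess at what "strategic noise on local-global differences" might look like. All function names, the trigger shape, and the noise model are assumptions.

```python
import numpy as np

def add_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square pattern into the image corner.
    (Placeholder for the paper's GAN-synthesized trigger.)"""
    poisoned = image.copy()
    poisoned[-size:, -size:] = trigger_value
    return poisoned

def poison_batch(images, labels, target_label, poison_frac=0.3, rng=None):
    """Mix triggered samples, relabelled to the attacker's target class,
    into a clean batch so local training sees both legitimate and
    triggered data."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

def gradient_injection(local_update, global_update, noise_scale=0.1, rng=None):
    """Toy rendition of 'gradient injection': add noise scaled to the
    local-global difference, so the poisoned update stays statistically
    close to a benign one."""
    rng = rng or np.random.default_rng(0)
    diff = local_update - global_update
    noise = rng.normal(0.0, noise_scale * np.abs(diff).mean() + 1e-12,
                       size=diff.shape)
    return global_update + diff + noise
```

A malicious client under this sketch would call `poison_batch` before each local epoch and `gradient_injection` on its model delta before uploading; in an asynchronous setting the same calls apply, since no coordination with other clients is needed.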
Source journal: Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.