FedAMM: Federated Learning Against Majority Malicious Clients Using Robust Aggregation

IF 8.0 | CAS Tier 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, THEORY & METHODS
Keke Gai;Dongjue Wang;Jing Yu;Liehuang Zhu;Weizhi Meng
{"title":"FedAMM: Federated Learning Against Majority Malicious Clients Using Robust Aggregation","authors":"Keke Gai;Dongjue Wang;Jing Yu;Liehuang Zhu;Weizhi Meng","doi":"10.1109/TIFS.2025.3607273","DOIUrl":null,"url":null,"abstract":"As a collaborative framework designed to safeguard privacy, <italic>Federated Learning</i> (FL) seeks to protect participants’ data throughout the training process. However, the framework still faces security risks from poisoning attacks, arising from the unmonitored process of client-side model updates. Most existing solutions address scenarios where less than half of clients are malicious, i.e., which leaves a significant challenge to defend against attacks when more than half of partici pants are malicious. In this paper, we propose a FL scheme, named FedAMM, that resists backdoor attacks across various data distributions and malicious client ratios. We develop a novel backdoor defense mechanism to filter out malicious models, aiming to reduce the performance degradation of the model. The proposed scheme addresses the challenge of distance measurement in high-dimensional spaces by applying <italic>Principal Component Analysis</i> (PCA) to improve clustering effectiveness. We borrow the idea of critical parameter analysis to enhance discriminative ability in non-iid data scenarios, via assessing the benign or malicious nature of models by comparing the similarity of critical parameters across different models. Finally, our scheme employs a hierarchical noise perturbation to improve the backdoor mitigation rate, effectively eliminating the backdoor and reducing the adverse effects of noise on task accuracy. Through evaluations conducted on multiple datasets, we demonstrate that the proposed scheme achieves superior backdoor defense across diverse client data distributions and different ratios of malicious participants. With 80% malicious clients, FedAMM achieves low backdoor attack success rates of 1.14%, 0.28%, and 5.53% on MNIST, FMNIST, and CIFAR-10, respectively, demonstrating enhanced robustness of FL against backdoor attacks.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"9950-9964"},"PeriodicalIF":8.0000,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11175562/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

As a collaborative framework designed to safeguard privacy, Federated Learning (FL) seeks to protect participants’ data throughout the training process. However, the framework still faces security risks from poisoning attacks, which arise because client-side model updates are unmonitored. Most existing solutions address scenarios where fewer than half of the clients are malicious, which leaves a significant challenge: defending against attacks when more than half of the participants are malicious. In this paper, we propose an FL scheme, named FedAMM, that resists backdoor attacks across various data distributions and malicious-client ratios. We develop a novel backdoor defense mechanism that filters out malicious models, aiming to reduce degradation of model performance. The proposed scheme addresses the challenge of distance measurement in high-dimensional spaces by applying Principal Component Analysis (PCA) to improve clustering effectiveness. We borrow the idea of critical parameter analysis to enhance discriminative ability in non-IID data scenarios, assessing whether a model is benign or malicious by comparing the similarity of critical parameters across models. Finally, our scheme employs hierarchical noise perturbation to improve the backdoor mitigation rate, effectively eliminating the backdoor while reducing the adverse effect of noise on task accuracy. Through evaluations on multiple datasets, we demonstrate that the proposed scheme achieves superior backdoor defense across diverse client data distributions and different ratios of malicious participants. With 80% malicious clients, FedAMM achieves backdoor attack success rates of only 1.14%, 0.28%, and 5.53% on MNIST, FMNIST, and CIFAR-10, respectively, demonstrating the enhanced robustness of FL against backdoor attacks.
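The abstract names three moving parts: PCA to make distance-based clustering workable in high dimensions, similarity of critical parameters to separate benign from malicious models under non-IID data, and hierarchical noise perturbation to scrub residual backdoors. The sketch below only makes that pipeline concrete on flattened update vectors; it is a minimal illustration under stated assumptions. The function names, the two-cluster split, the magnitude-based notion of a "critical parameter", and the "tighter cluster is suspicious" heuristic are all illustrative assumptions, not FedAMM's published procedure.

```python
# Illustrative sketch (assumptions noted above), not FedAMM's actual code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def filter_updates(updates, global_model, n_components=10, top_k=1000):
    """Return indices of client updates judged benign.

    updates: (n_clients, n_params) flattened client model updates.
    global_model: (n_params,) flattened current global model.
    """
    # 1) PCA: plain distances degrade in high dimensions, so project
    #    updates to a low-dimensional space before clustering.
    k = min(n_components, *updates.shape)
    z = PCA(n_components=k).fit_transform(updates)

    # 2) Split clients into two candidate groups. No assumption is made
    #    about which group is larger, so a malicious majority does not
    #    automatically win the vote.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(z)

    # 3) "Critical parameters": here, the largest-magnitude coordinates
    #    of the global model (an assumed, simple proxy for importance).
    crit = np.argsort(-np.abs(global_model))[:top_k]

    def mean_pairwise_cos(idx):
        # Mean off-diagonal cosine similarity over critical coordinates.
        u = updates[np.ix_(idx, crit)]
        u = u / (np.linalg.norm(u, axis=1, keepdims=True) + 1e-12)
        sim = u @ u.T
        n = len(idx)
        return (sim.sum() - n) / max(n * (n - 1), 1)

    # 4) Colluding backdoor updates tend to agree on the attacked
    #    parameters, while diverse non-IID benign updates agree less.
    #    Treat the less self-similar cluster as benign (heuristic).
    scores = [mean_pairwise_cos(np.where(labels == c)[0]) for c in (0, 1)]
    return np.where(labels == int(np.argmin(scores)))[0]


def hierarchical_perturb(model, layer_slices, sigmas, rng):
    """One possible reading of "hierarchical noise perturbation"
    (assumption): per-layer Gaussian noise, stronger on layers suspected
    of hosting the backdoor, weaker elsewhere to preserve task accuracy."""
    out = model.copy()
    for sl, sigma in zip(layer_slices, sigmas):
        out[sl] += rng.normal(scale=sigma, size=out[sl].shape)
    return out


# Toy usage: 10 clients, a 1000-parameter model.
rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 1000))
global_model = rng.normal(size=1000)
benign = filter_updates(updates, global_model, top_k=100)
aggregated = updates[benign].mean(axis=0)
cleaned = hierarchical_perturb(
    aggregated, [slice(0, 500), slice(500, 1000)], [1e-3, 1e-2], rng)
```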
Source Journal

IEEE Transactions on Information Forensics and Security
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.