{"title":"FedAMM: Federated Learning Against Majority Malicious Clients Using Robust Aggregation","authors":"Keke Gai;Dongjue Wang;Jing Yu;Liehuang Zhu;Weizhi Meng","doi":"10.1109/TIFS.2025.3607273","DOIUrl":null,"url":null,"abstract":"As a collaborative framework designed to safeguard privacy, <italic>Federated Learning</i> (FL) seeks to protect participants’ data throughout the training process. However, the framework still faces security risks from poisoning attacks, arising from the unmonitored process of client-side model updates. Most existing solutions address scenarios where less than half of clients are malicious, i.e., which leaves a significant challenge to defend against attacks when more than half of partici pants are malicious. In this paper, we propose a FL scheme, named FedAMM, that resists backdoor attacks across various data distributions and malicious client ratios. We develop a novel backdoor defense mechanism to filter out malicious models, aiming to reduce the performance degradation of the model. The proposed scheme addresses the challenge of distance measurement in high-dimensional spaces by applying <italic>Principal Component Analysis</i> (PCA) to improve clustering effectiveness. We borrow the idea of critical parameter analysis to enhance discriminative ability in non-iid data scenarios, via assessing the benign or malicious nature of models by comparing the similarity of critical parameters across different models. Finally, our scheme employs a hierarchical noise perturbation to improve the backdoor mitigation rate, effectively eliminating the backdoor and reducing the adverse effects of noise on task accuracy. Through evaluations conducted on multiple datasets, we demonstrate that the proposed scheme achieves superior backdoor defense across diverse client data distributions and different ratios of malicious participants. With 80% malicious clients, FedAMM achieves low backdoor attack success rates of 1.14%, 0.28%, and 5.53% on MNIST, FMNIST, and CIFAR-10, respectively, demonstrating enhanced robustness of FL against backdoor attacks.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"9950-9964"},"PeriodicalIF":8.0000,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11175562/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
As a collaborative framework designed to safeguard privacy, Federated Learning (FL) seeks to protect participants’ data throughout the training process. However, the framework still faces security risks from poisoning attacks because client-side model updates are not monitored. Most existing solutions address scenarios in which fewer than half of the clients are malicious, leaving a significant open challenge: defending against attacks when more than half of the participants are malicious. In this paper, we propose an FL scheme, named FedAMM, that resists backdoor attacks across various data distributions and malicious client ratios. We develop a novel backdoor defense mechanism to filter out malicious models, aiming to reduce the resulting degradation of model performance. The proposed scheme addresses the challenge of distance measurement in high-dimensional spaces by applying Principal Component Analysis (PCA) to improve clustering effectiveness. We borrow the idea of critical parameter analysis to enhance discriminative ability in non-IID data scenarios, assessing whether a model is benign or malicious by comparing the similarity of critical parameters across models. Finally, our scheme employs hierarchical noise perturbation to improve the backdoor mitigation rate, effectively eliminating the backdoor while reducing the adverse effects of noise on task accuracy. Through evaluations on multiple datasets, we demonstrate that the proposed scheme achieves superior backdoor defense across diverse client data distributions and different ratios of malicious participants. With 80% malicious clients, FedAMM achieves low backdoor attack success rates of 1.14%, 0.28%, and 5.53% on MNIST, FMNIST, and CIFAR-10, respectively, demonstrating the enhanced robustness of FL against backdoor attacks.
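To make the PCA-plus-clustering step described above concrete, the following is a minimal, illustrative sketch (not FedAMM's actual implementation): flattened client updates are projected into a low-dimensional space with PCA and then clustered, with the minority cluster flagged as suspicious. Function and variable names such as `flag_suspicious_updates` are hypothetical, and the minority-cluster rule is a deliberate simplification that fails under a malicious majority, which is precisely why the paper adds further checks (critical parameter similarity, hierarchical noise perturbation) not reproduced here.

```python
# Illustrative sketch only: PCA-based dimensionality reduction before clustering
# client model updates, as described at a high level in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def flag_suspicious_updates(updates: np.ndarray, n_components: int = 10) -> np.ndarray:
    """Project flattened client updates into a low-dimensional space with PCA,
    split them into two clusters, and flag the smaller cluster as suspicious.

    updates: array of shape (n_clients, n_parameters), one flattened update per client.
    Returns a boolean mask of shape (n_clients,), True for suspected-malicious clients.
    """
    # PCA mitigates the distance-concentration problem in high-dimensional spaces,
    # which otherwise degrades clustering quality.
    n_components = min(n_components, updates.shape[0], updates.shape[1])
    reduced = PCA(n_components=n_components).fit_transform(updates)

    # Two-way clustering; the (simplifying) assumption is that benign and
    # malicious updates form separable groups in the reduced space.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

    # Flag the minority cluster. This simple rule breaks under a malicious
    # majority, motivating FedAMM's additional defenses.
    minority = np.argmin(np.bincount(labels, minlength=2))
    return labels == minority

# Toy example: 10 clients with 1000-parameter updates, last 3 poisoned by a large shift.
rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 1000))
updates[-3:] += 5.0
print(flag_suspicious_updates(updates))
```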
About the journal:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.