Title: UDFed: A Universal Defense Scheme for Various Poisoning Attacks on Federated Learning
Authors: Jieyi Deng; Congduan Li; Nanfeng Zhang; Jingfeng Yang; Jun Gao
Journal: IEEE Transactions on Information Forensics and Security, vol. 20, pp. 10480-10494
DOI: 10.1109/TIFS.2025.3611126 (https://ieeexplore.ieee.org/document/11168922/)
Publication date: 2025-09-17 (Journal Article)
Impact factor: 8.0 · JCR: Q1 (Computer Science, Theory & Methods) · Region: 1 (Computer Science)
Open access: no
Citations: 0
Abstract
Federated learning (FL), as a distributed machine learning paradigm with privacy protection, has garnered significant attention since it avoids the exchange of raw local data. However, FL remains vulnerable to poisoning attacks, including data contamination and gradient manipulation. Moreover, attackers may launch individual or collusive attacks, complicating the identification of malicious clients. To address these challenges, we propose a universal poisoning defense framework incorporating three key strategies. First, we decouple client identities from gradients through anonymous obfuscation and enhance privacy with differential noise injection. Second, we detect potential collusive attackers via a joint similarity-based approach. Third, we apply iterative low-rank-approximation-based anomaly detection to amplify discrepancies between benign and malicious clients and progressively filter out attackers. We theoretically demonstrate that anonymous obfuscation can enhance the privacy protection capability of differential privacy. Additionally, experimental results further validate that our scheme is comparable to or outperforms state-of-the-art defense methods against a variety of data and model poisoning attacks.
About the journal:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.