FedID: Enhancing Federated Learning Security Through Dynamic Identification
Siquan Huang; Yijiang Li; Chong Chen; Ying Gao; Xiping Hu
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 10, pp. 8907-8922, published 2025-06-20
DOI: 10.1109/TPAMI.2025.3581555
https://ieeexplore.ieee.org/document/11045524/
Citations: 0
Abstract
Federated learning (FL), recognized for its decentralized and privacy-preserving nature, faces vulnerabilities to backdoor attacks that aim to manipulate the model’s behavior on attacker-chosen inputs. Most existing defenses based on statistical differences are effective only against specific attacks. This limitation becomes particularly pronounced when malicious gradients closely resemble benign ones or the data exhibits non-IID characteristics, making the defenses ineffective against stealthy attacks. This paper revisits distance-based defense methods and uncovers two critical insights: First, Euclidean distance becomes meaningless in high dimensions. Second, a single metric cannot identify malicious gradients with diverse characteristics. As a remedy, we propose FedID, a simple yet effective strategy employing multiple metrics with dynamic weighting for adaptive backdoor detection. In addition, we present a modified z-score approach to select the gradients for aggregation. Notably, FedID does not rely on predefined assumptions about attack settings or data distributions and minimally impacts benign performance. We conduct extensive experiments on various datasets and attack scenarios to assess its effectiveness. FedID consistently outperforms previous defenses, particularly excelling in challenging Edge-case PGD scenarios. Our experiments highlight its robustness against adaptive attacks tailored to break the proposed defense and its adaptability to a wide range of non-IID data distributions without compromising benign performance.
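To make the idea of combining multiple distance metrics with a modified z-score filter concrete, the following is a minimal illustrative sketch, not the actual FedID algorithm: the metric choices, the spread-based weighting, and the outlier threshold are all assumptions introduced for illustration.

```python
# Hypothetical sketch of multi-metric screening followed by a modified z-score
# (median/MAD-based) filter before aggregation. Metric set, weighting rule, and
# threshold are illustrative assumptions, not the paper's exact FedID design.
import numpy as np


def multi_metric_scores(updates: np.ndarray) -> np.ndarray:
    """Score each client update by several distances to the coordinate-wise median."""
    ref = np.median(updates, axis=0)                    # robust reference update
    euclid = np.linalg.norm(updates - ref, axis=1)      # L2 distance
    manhattan = np.abs(updates - ref).sum(axis=1)       # L1 distance
    cosine = 1.0 - (updates @ ref) / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(ref) + 1e-12
    )                                                   # cosine dissimilarity
    metrics = np.stack([euclid, manhattan, cosine], axis=1)
    # Rescale each metric to [0, 1] so no single scale dominates, then weight
    # metrics by their spread (a simple stand-in for dynamic weighting).
    metrics = (metrics - metrics.min(axis=0)) / (np.ptp(metrics, axis=0) + 1e-12)
    weights = metrics.std(axis=0) / (metrics.std(axis=0).sum() + 1e-12)
    return metrics @ weights


def modified_zscore_keep(scores: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Keep clients whose modified z-score (0.6745 * (x - median) / MAD) is small."""
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    return 0.6745 * (scores - med) / mad < threshold


def screened_average(updates: np.ndarray) -> np.ndarray:
    """Average only the updates that pass the modified z-score screen."""
    keep = modified_zscore_keep(multi_metric_scores(updates))
    return updates[keep].mean(axis=0)


# Toy usage: nine benign updates plus one crudely shifted "malicious" update.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.1, size=(10, 128))
updates[-1] += 5.0
print(screened_average(updates).shape)                  # (128,)
```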