Authors: Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
arXiv:2409.06474 (arXiv - CS - Distributed, Parallel, and Cluster Computing)
Published: 2024-09-10
Advancing Hybrid Defense for Byzantine Attacks in Federated Learning
Federated learning (FL) enables multiple clients to collaboratively train a
global model without sharing their local data. Recent studies have highlighted
the vulnerability of FL to Byzantine attacks, where malicious clients send
poisoned updates to degrade model performance. Notably, many attacks have been
developed targeting specific aggregation rules, whereas various defense
mechanisms have been designed for dedicated threat models. This paper studies
the resilience of an attack-agnostic FL scenario, where the server lacks prior
knowledge of both the attackers' strategies and the number of malicious clients
involved. We first introduce a hybrid defense against state-of-the-art attacks.
Our goal is to identify a general-purpose aggregation rule that performs well
on average while also avoiding worst-case vulnerabilities. By adaptively
selecting from available defenses, we demonstrate that the server remains
robust even when confronted with a substantial proportion of poisoned updates.
To better understand this resilience, we then assess the attackers' capability
using a proxy called client heterogeneity. We also emphasize that the existing
FL defenses should not be regarded as secure, as demonstrated through the newly
proposed Trapsetter attack. The proposed attack outperforms other
state-of-the-art attacks by further reducing the model test accuracy by 8-10%.
Our findings highlight the ongoing need for the development of
Byzantine-resilient aggregation algorithms in FL.
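To make the threat concrete: a Byzantine-robust aggregation rule replaces the plain averaging of client updates with a statistic that a minority of poisoned updates cannot drag arbitrarily far. The sketch below is illustrative only, assuming a simple coordinate-wise median as the robust rule and toy two-dimensional updates; it is not the paper's hybrid defense or Trapsetter attack, just a minimal demonstration of why the choice of aggregation rule matters.

```python
# Minimal sketch (not the paper's method): plain mean vs. coordinate-wise
# median when one Byzantine client poisons its update.
from statistics import mean, median

def aggregate_mean(updates):
    """FedAvg-style mean: a single large poisoned update skews every coordinate."""
    return [mean(coord) for coord in zip(*updates)]

def aggregate_median(updates):
    """Coordinate-wise median: robust while honest clients form a majority."""
    return [median(coord) for coord in zip(*updates)]

# Three honest clients send similar gradients; one Byzantine client
# sends a large poisoned update (all values are made up for illustration).
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
byzantine = [[100.0, -100.0]]
updates = honest + byzantine

print(aggregate_mean(updates))    # pulled far away from the honest consensus
print(aggregate_median(updates))  # stays close to the honest consensus
```

This also hints at the paper's attack-agnostic setting: no single rule is uniformly best (the median itself degrades as the malicious fraction grows or client data become heterogeneous), which motivates adaptively selecting among several defenses at the server.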