{"title":"FLCom:针对强模型中毒攻击的鲁棒联邦学习","authors":"Yang Li , Jun Xu , Dejun Yang","doi":"10.1016/j.comnet.2025.111442","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning (FL) is an emerging distributed machine learning framework that enables models to be trained on multiple decentralized devices or servers without transferring data to a centralized server. However, due to its distributed nature, FL is vulnerable to attacks from malicious clients. Although most Byzantine-robust FL methods are designed against model poisoning attacks, they lose effectiveness as the intensity of attacks increases or when new attack strategies emerge. To address these challenges, we propose a novel robust FL method, called FLCom, which leverages outlier detection to defend against model poisoning attacks. FLCom enhances the robustness of FL and outperforms the state-of-the-art methods in accuracy. Additionally, we propose an improved model poisoning attack, called vector-scaling attack (VSA), which exhibits stronger stealthiness against robust aggregation methods. We evaluate both our defense and attack methods under IID and Non-IID settings across three different datasets. The results demonstrate that FLCom achieves higher accuracy than other methods under various attacks, particularly in the Non-IID case. Furthermore, FLCom effectively defends against our proposed VSA, while VSA successfully breaches existing defense mechanisms.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"269 ","pages":"Article 111442"},"PeriodicalIF":4.4000,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FLCom: Robust federated learning against strong model poisoning attacks\",\"authors\":\"Yang Li , Jun Xu , Dejun Yang\",\"doi\":\"10.1016/j.comnet.2025.111442\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated learning (FL) is an emerging distributed machine learning framework that enables models to be trained on multiple decentralized devices or servers without transferring data to a centralized server. However, due to its distributed nature, FL is vulnerable to attacks from malicious clients. Although most Byzantine-robust FL methods are designed against model poisoning attacks, they lose effectiveness as the intensity of attacks increases or when new attack strategies emerge. To address these challenges, we propose a novel robust FL method, called FLCom, which leverages outlier detection to defend against model poisoning attacks. FLCom enhances the robustness of FL and outperforms the state-of-the-art methods in accuracy. Additionally, we propose an improved model poisoning attack, called vector-scaling attack (VSA), which exhibits stronger stealthiness against robust aggregation methods. We evaluate both our defense and attack methods under IID and Non-IID settings across three different datasets. The results demonstrate that FLCom achieves higher accuracy than other methods under various attacks, particularly in the Non-IID case. 
Furthermore, FLCom effectively defends against our proposed VSA, while VSA successfully breaches existing defense mechanisms.</div></div>\",\"PeriodicalId\":50637,\"journal\":{\"name\":\"Computer Networks\",\"volume\":\"269 \",\"pages\":\"Article 111442\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2025-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389128625004098\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625004098","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
FLCom: Robust federated learning against strong model poisoning attacks
Federated learning (FL) is an emerging distributed machine learning framework that enables models to be trained across many decentralized devices or servers without transferring raw data to a central server. However, its distributed nature leaves FL vulnerable to attacks from malicious clients. Although most Byzantine-robust FL methods are designed to counter model poisoning attacks, they lose effectiveness as attack intensity increases or when new attack strategies emerge. To address these challenges, we propose a novel robust FL method, called FLCom, which leverages outlier detection to defend against model poisoning attacks. FLCom strengthens the robustness of FL and outperforms state-of-the-art methods in accuracy. Additionally, we propose an improved model poisoning attack, called the vector-scaling attack (VSA), which is stealthier against robust aggregation methods. We evaluate both our defense and our attack under IID and Non-IID settings across three different datasets. The results demonstrate that FLCom achieves higher accuracy than other methods under various attacks, particularly in the Non-IID case. Furthermore, FLCom effectively defends against our proposed VSA, while VSA successfully bypasses existing defense mechanisms.
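The abstract does not describe FLCom's algorithm or VSA in detail, so the following is only a minimal sketch of the general patterns it names: outlier-detection-based robust aggregation on the defense side, and a scaled perturbation of the benign update on the attack side. The function names (`robust_aggregate`, `vector_scaling_attack`), the MAD-based filtering threshold, and the attack's scaling scheme are all illustrative assumptions, not the paper's actual constructions.

```python
import numpy as np

def robust_aggregate(updates, z=2.0):
    """Generic outlier-filtering aggregation (illustrative; not FLCom itself).

    Scores each client update by its distance to the coordinate-wise median
    and averages only the updates whose score lies within z median absolute
    deviations (MAD) of the median score.
    """
    U = np.stack(updates)                         # (n_clients, dim)
    center = np.median(U, axis=0)                 # robust reference point
    dists = np.linalg.norm(U - center, axis=1)    # per-client outlier score
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12  # avoid division by zero
    keep = np.abs(dists - med) <= z * mad         # inlier mask
    return U[keep].mean(axis=0)                   # aggregate surviving updates

def vector_scaling_attack(benign_mean, direction, gamma=0.5):
    """Generic scaled-perturbation poisoning (illustrative stand-in for VSA).

    Shifts the crafted update away from the benign mean along a chosen
    direction; a small gamma keeps the update close enough to the benign
    cluster that distance-based filters may still accept it.
    """
    return benign_mean - gamma * direction

# Toy usage: 8 benign clients, 2 colluding attackers.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
mu = np.mean(benign, axis=0)
malicious = [vector_scaling_attack(mu, np.sign(mu)) for _ in range(2)]
aggregated = robust_aggregate(benign + malicious)
```

The sketch illustrates the tension the abstract highlights: a distance-based filter catches large, obvious perturbations, but an attacker who tunes the scaling factor to stay inside the inlier band can slip past it, which is the kind of stealth the paper attributes to VSA.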
Journal Introduction:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the area of computer communications and networking. The audience includes researchers, managers, and operators of networks, as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.