FDBA: Feature-guided Defense against Byzantine and Adaptive attacks in Federated Learning

Impact Factor: 3.8 · CAS Region 2 (Computer Science) · JCR Q2, Computer Science, Information Systems
Chenyu Hu, Qiming Hu, Mingyue Zhang, Zheng Yang
DOI: 10.1016/j.jisa.2025.104035
Journal: Journal of Information Security and Applications, Volume 90, Article 104035
Published: 2025-03-26
URL: https://www.sciencedirect.com/science/article/pii/S2214212625000730
Citations: 0

Abstract

Federated Learning (FL) is a general paradigm that enables decentralized model training while preserving data privacy, allowing multiple clients to collaboratively train a global model without sharing raw data. With the increasing application of Large Language Models (LLMs) in fields like finance and healthcare, data privacy concerns have grown. Federated LLMs have emerged as a solution, enabling the collaborative improvement of LLMs while protecting sensitive data. However, federated LLMs, like other FL applications, are vulnerable to Byzantine attacks, where one or more malicious clients attempt to poison the global model by corrupting local data or sending crafted local model updates to the server. Existing defenses that focus on directly analyzing local updates struggle with the large parameter sizes of modern models like LLMs. Thus, we need to design more effective defense mechanisms that can scale to models of varying sizes.
In this work, we propose FDBA, a method designed to enhance robustness and efficiency in FL. Unlike traditional defenses that rely solely on analyzing local model updates, our approach extracts features called PDist from the models to describe the impact of these updates. We propose a cooperative learning mechanism based on PDist, which evaluates features across three dimensions to determine whether an update is malicious or benign. Specifically, FDBA first performs clustering on PDist, and then classifies the clustering results using additional auxiliary data to efficiently and accurately identify malicious clients. Finally, historical information is leveraged to further enhance detection accuracy. We conduct extensive evaluations on three datasets, and the results show that FDBA effectively defends against both existing Byzantine and adaptive attacks. For example, under six types of Byzantine attacks, FDBA maintains the same accuracy as a global model trained with FedAvg without any attacks. Additionally, we perform evaluations on LLMs, and the results demonstrate that FDBA still achieves high accuracy under representative Byzantine attacks.
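The detection loop described in the abstract (extract PDist features, cluster them, label clusters with the help of auxiliary data, and keep per-client history) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the names `pdist_feature`, `two_means_1d`, and `aux_threshold` are invented for the sketch, and using an update's distance to the global model as the PDist feature is an assumption.

```python
import numpy as np

def pdist_feature(update, global_model):
    """Placeholder PDist: distance between a local update and the global model
    (an assumption standing in for the paper's feature)."""
    return np.linalg.norm(update - global_model)

def two_means_1d(x, iters=10):
    """Tiny 1-D 2-means that splits feature values into two clusters."""
    centers = np.array([x.min(), x.max()], dtype=float)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = (np.abs(x - centers[0]) > np.abs(x - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return labels, centers

def detect_malicious(updates, global_model, history, aux_threshold=1.0):
    """One round: cluster PDist features, mark the far cluster suspicious if it
    exceeds an auxiliary-data-derived threshold, and record per-client history
    (which FDBA uses to refine detection across rounds)."""
    feats = np.array([pdist_feature(u, global_model) for u in updates])
    labels, centers = two_means_1d(feats)
    bad_cluster = int(centers[1] > centers[0])  # cluster farther from the global model
    if centers[bad_cluster] > aux_threshold:
        suspicious = labels == bad_cluster
    else:
        suspicious = np.zeros(len(feats), dtype=bool)
    for i, s in enumerate(suspicious):
        history[i] = history.get(i, 0) + int(s)
    return [i for i, s in enumerate(suspicious) if s], history
```

For example, with three clients whose updates stay near the global model and one client sending a large-norm update, the fourth client lands alone in the far cluster and is flagged.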
Source journal
Journal of Information Security and Applications
Category: Computer Science, Computer Networks and Communications
CiteScore: 10.90
Self-citation rate: 5.40%
Articles per year: 206
Review time: 56 days
About the journal: Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.