FDBA: Feature-guided Defense against Byzantine and Adaptive attacks in Federated Learning

IF 3.8 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS
Chenyu Hu, Qiming Hu, Mingyue Zhang, Zheng Yang
Citations: 0

Abstract

Federated Learning (FL) is a general paradigm that enables decentralized model training while preserving data privacy, allowing multiple clients to collaboratively train a global model without sharing raw data. With the increasing application of Large Language Models (LLMs) in fields like finance and healthcare, data privacy concerns have grown. Federated LLMs have emerged as a solution, enabling the collaborative improvement of LLMs while protecting sensitive data. However, federated LLMs, like other FL applications, are vulnerable to Byzantine attacks, where one or more malicious clients attempt to poison the global model by corrupting local data or sending crafted local model updates to the server. Existing defenses that focus on directly analyzing local updates struggle with the large parameter sizes of modern models like LLMs. Thus, we need to design more effective defense mechanisms that can scale to models of varying sizes.
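To make the threat concrete, here is a minimal sketch (not taken from the paper; `fedavg`, `sign_flip_attack`, and all constants are illustrative assumptions) of how plain FedAvg averaging lets a single scaled, sign-flipped update skew the aggregate:

```python
# Minimal sketch, assuming a sign-flipping attacker; not the paper's setup.
import numpy as np

def fedavg(updates):
    # FedAvg simply averages local updates; it has no built-in defense,
    # so one large malicious vector shifts the whole aggregate.
    return np.mean(np.stack(updates), axis=0)

def sign_flip_attack(benign_direction, scale=5.0):
    # A classic Byzantine strategy: send the negated, scaled benign
    # direction to drag the global model away from the optimum.
    return -scale * benign_direction

rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.01, size=10) for _ in range(9)]  # 9 honest clients
malicious = sign_flip_attack(np.mean(benign, axis=0))        # 1 attacker
print(fedavg(benign))                # close to the honest mean
print(fedavg(benign + [malicious]))  # visibly pulled toward the attacker
```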
In this work, we propose FDBA, a method designed to enhance robustness and efficiency in FL. Unlike traditional defenses that rely solely on analyzing local model updates, our approach extracts features called PDist from the models to describe the impact of these updates. We propose a cooperative learning mechanism based on PDist, which evaluates features across three dimensions to determine whether an update is malicious or benign. Specifically, FDBA first performs clustering on PDist, and then classifies the clustering results using additional auxiliary data to efficiently and accurately identify malicious clients. Finally, historical information is leveraged to further improve detection accuracy. We conduct extensive evaluations on three datasets, and the results show that FDBA effectively defends against both existing Byzantine and adaptive attacks. For example, under six types of Byzantine attacks, FDBA matches the accuracy of a global model trained with FedAvg in the absence of any attack. Additionally, we evaluate FDBA on LLMs, and the results demonstrate that it still achieves high accuracy under representative Byzantine attacks.
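The abstract does not define PDist or the auxiliary-data classifier, so the following is only a hedged sketch of the three-stage idea it describes (cluster the features, label clusters against clean auxiliary data, smooth with history); `detect_malicious`, `reference_feature`, and the smoothing factor `alpha` are all assumptions:

```python
# Hedged sketch of a PDist-style detector; the paper's actual features,
# classifier, and history rule are not given here, so everything below
# is an assumed instantiation of "cluster, classify, use history".
import numpy as np
from sklearn.cluster import KMeans

def detect_malicious(pdist_features, reference_feature, history, alpha=0.5):
    # Stage 1: cluster per-client feature vectors into two groups.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pdist_features)
    # Stage 2: classify the clusters using auxiliary data, here by the
    # distance of each cluster center to a reference feature computed
    # on that clean data; the farther cluster is treated as malicious.
    dists = np.linalg.norm(km.cluster_centers_ - reference_feature, axis=1)
    bad_cluster = int(np.argmax(dists))
    suspicion = (km.labels_ == bad_cluster).astype(float)
    # Stage 3: smooth with per-client history so a single noisy round
    # does not flip an honest client to "malicious".
    history = alpha * history + (1.0 - alpha) * suspicion
    return history >= 0.5, history

# Toy usage: 8 honest clients near the reference feature, 2 far from it.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (8, 4)), rng.normal(3, 0.1, (2, 4))])
flags, hist = detect_malicious(feats, np.zeros(4), np.zeros(10))
print(flags)  # expect only the last two clients flagged
```

Carrying `history` across rounds is what lets a detector of this shape tolerate occasional clustering mistakes without permanently excluding honest clients.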
Source journal
Journal of Information Security and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Articles published: 206
Review time: 56 days
About the journal: Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.