On the security and privacy of federated learning: A survey with attacks, defenses, frameworks, applications, and future directions

IF 15.5 · Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Information Fusion Pub Date : 2026-07-01 Epub Date: 2026-01-16 DOI:10.1016/j.inffus.2026.104155
Daniel M. Jimenez-Gutierrez , Yelizaveta Falkouskaya , José L. Hernandez-Ramos , Aris Anagnostopoulos , Ioannis Chatzigiannakis , Andrea Vitaletti
Information Fusion, Volume 131, Article 104155. Available at: https://www.sciencedirect.com/science/article/pii/S1566253526000345
Citations: 0

Abstract

Federated Learning (FL) is an emerging distributed machine learning paradigm enabling multiple clients to train a global model collaboratively without sharing their raw data. While FL enhances data privacy by design, it remains vulnerable to various security and privacy threats. This survey provides a comprehensive overview of 203 papers on state-of-the-art attacks and the defense mechanisms developed to address them, categorizing the latter into security-enhancing and privacy-preserving techniques. Security-enhancing methods aim to improve FL robustness against malicious behaviors such as Byzantine attacks, poisoning, and Sybil attacks, while privacy-preserving techniques focus on protecting sensitive data through cryptographic approaches, differential privacy, and secure aggregation. We critically analyze the strengths and limitations of existing methods, highlight the trade-offs between privacy, security, and model performance, and discuss the implications of non-IID data distributions for the effectiveness of these defenses. Furthermore, we identify open research challenges and future directions, including the need for scalable, adaptive, and energy-efficient solutions operating in dynamic and heterogeneous FL environments. Our survey aims to guide researchers and practitioners in developing robust and privacy-preserving FL systems, fostering advancements that safeguard the integrity and confidentiality of collaborative learning frameworks.
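The aggregation and defense concepts the abstract mentions can be illustrated with a small toy sketch. This is not code from the survey: the function names are ours, plain NumPy vectors stand in for model updates, and the coordinate-wise median and Gaussian clipping-plus-noise are just two representative examples of the Byzantine-robust and differential-privacy techniques the survey categorizes.

```python
import numpy as np

def fedavg(updates, weights):
    """FedAvg-style aggregation: weighted average of client updates."""
    w = np.asarray(weights, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=w / w.sum())

def coordinate_median(updates):
    """Coordinate-wise median: a simple Byzantine-robust aggregator."""
    return np.median(np.stack(updates), axis=0)

def clip_and_noise(update, clip_norm, noise_std, rng):
    """DP-style client-side step: clip the L2 norm, then add Gaussian noise."""
    scale = min(1.0, clip_norm / np.linalg.norm(update))
    return update * scale + rng.normal(0.0, noise_std, size=update.shape)

# Three honest clients send similar updates; one Byzantine client sends garbage.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = np.array([100.0, -100.0])
updates = honest + [byzantine]

mean_agg = fedavg(updates, [1, 1, 1, 1])   # pulled far away by the outlier
median_agg = coordinate_median(updates)    # stays near the honest updates
```

Here the plain average lands at roughly (25.75, -23.5), dominated by the single malicious update, while the coordinate-wise median stays near (1.05, 1.95), close to the honest clients. This is the core robustness-versus-accuracy trade-off the survey analyzes: robust aggregators resist outliers but can discard useful signal, especially under non-IID data where honest updates legitimately diverge.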
Source journal: Information Fusion (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.