DRIFT: DCT-based robust and intelligent federated learning with trusted privacy

IF 6.5 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Qihao Dong, Yang Bai, Mang Su, Yansong Gao, Anmin Fu
{"title":"漂移:基于dct的鲁棒和智能联邦学习,具有可信任的隐私","authors":"Qihao Dong ,&nbsp;Yang Bai ,&nbsp;Mang Su ,&nbsp;Yansong Gao ,&nbsp;Anmin Fu","doi":"10.1016/j.neucom.2025.131697","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) allows collaborative model training across decentralized clients without sharing private data. However, traditional FL frameworks face dual challenges: vulnerability to Byzantine attacks (where malicious clients submit adversarial model updates) and privacy breaches (where curious clients infer sensitive information from exchanged parameters), exacerbated by decentralized operations and unencrypted communications. While existing work addresses robustness or privacy individually, the interplay between defense mechanisms, particularly the trade-off between attack resilience and utility degradation caused by privacy safeguards, remains understudied. To bridge this gap, we propose <em>DRIFT</em>, a novel FL framework that simultaneously achieves Byzantine robustness and privacy preservation. Our approach uniquely combines spectral analysis with cryptographic protection: By transforming model parameters into the frequency domain through Discrete Cosine Transform, <em>DRIFT</em> identifies malicious updates via spectral clustering while inherently obscuring sensitive parameter patterns. This defense mechanism is further reinforced by a privacy-preserving aggregation protocol leveraging fully homomorphic encryption with floating-point computation. It encrypts client updates during transmission and aggregation without compromising their computational usability. Extensive evaluations on MNIST and PathMNIST demonstrate that <em>DRIFT</em> outperforms baseline methods in resisting state-of-the-art Byzantine attacks while maintaining model utility and providing provable privacy guarantees.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"658 ","pages":"Article 131697"},"PeriodicalIF":6.5000,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DRIFT: DCT-based robust and intelligent federated learning with trusted privacy\",\"authors\":\"Qihao Dong ,&nbsp;Yang Bai ,&nbsp;Mang Su ,&nbsp;Yansong Gao ,&nbsp;Anmin Fu\",\"doi\":\"10.1016/j.neucom.2025.131697\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated Learning (FL) allows collaborative model training across decentralized clients without sharing private data. However, traditional FL frameworks face dual challenges: vulnerability to Byzantine attacks (where malicious clients submit adversarial model updates) and privacy breaches (where curious clients infer sensitive information from exchanged parameters), exacerbated by decentralized operations and unencrypted communications. While existing work addresses robustness or privacy individually, the interplay between defense mechanisms, particularly the trade-off between attack resilience and utility degradation caused by privacy safeguards, remains understudied. To bridge this gap, we propose <em>DRIFT</em>, a novel FL framework that simultaneously achieves Byzantine robustness and privacy preservation. Our approach uniquely combines spectral analysis with cryptographic protection: By transforming model parameters into the frequency domain through Discrete Cosine Transform, <em>DRIFT</em> identifies malicious updates via spectral clustering while inherently obscuring sensitive parameter patterns. 
This defense mechanism is further reinforced by a privacy-preserving aggregation protocol leveraging fully homomorphic encryption with floating-point computation. It encrypts client updates during transmission and aggregation without compromising their computational usability. Extensive evaluations on MNIST and PathMNIST demonstrate that <em>DRIFT</em> outperforms baseline methods in resisting state-of-the-art Byzantine attacks while maintaining model utility and providing provable privacy guarantees.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"658 \",\"pages\":\"Article 131697\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225023690\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225023690","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cited by: 0

Abstract

Federated Learning (FL) allows collaborative model training across decentralized clients without sharing private data. However, traditional FL frameworks face dual challenges: vulnerability to Byzantine attacks (where malicious clients submit adversarial model updates) and privacy breaches (where curious clients infer sensitive information from exchanged parameters), exacerbated by decentralized operations and unencrypted communications. While existing work addresses robustness or privacy individually, the interplay between defense mechanisms, particularly the trade-off between attack resilience and utility degradation caused by privacy safeguards, remains understudied. To bridge this gap, we propose DRIFT, a novel FL framework that simultaneously achieves Byzantine robustness and privacy preservation. Our approach uniquely combines spectral analysis with cryptographic protection: By transforming model parameters into the frequency domain through Discrete Cosine Transform, DRIFT identifies malicious updates via spectral clustering while inherently obscuring sensitive parameter patterns. This defense mechanism is further reinforced by a privacy-preserving aggregation protocol leveraging fully homomorphic encryption with floating-point computation. It encrypts client updates during transmission and aggregation without compromising their computational usability. Extensive evaluations on MNIST and PathMNIST demonstrate that DRIFT outperforms baseline methods in resisting state-of-the-art Byzantine attacks while maintaining model utility and providing provable privacy guarantees.
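To make the spectral-analysis step concrete, the sketch below illustrates the general idea of DCT-based Byzantine filtering as described in the abstract: flattened client updates are mapped into the frequency domain and grouped by spectral clustering, with the majority cluster treated as benign. The feature truncation, affinity choice, and majority heuristic are assumptions for illustration; the paper's actual procedure is not specified in this abstract.

```python
# A minimal sketch of the frequency-domain filtering idea, assuming an
# honest-majority setting. Names and parameters here are illustrative.
import numpy as np
from scipy.fft import dct
from sklearn.cluster import SpectralClustering

def filter_updates(updates: np.ndarray, n_low_freq: int = 64) -> np.ndarray:
    """Flag suspected Byzantine updates by clustering DCT spectra.

    updates: (n_clients, n_params) array of flattened model updates.
    Returns a boolean mask over clients: True = keep for aggregation.
    """
    # Transform each client's update into the frequency domain (DCT-II).
    spectra = dct(updates, norm='ortho')            # (n_clients, n_params)
    # Low-frequency coefficients summarize the dominant update direction;
    # truncating to them is an assumption to keep clustering tractable.
    features = spectra[:, :n_low_freq]
    # Two-way spectral clustering: benign majority vs. outliers.
    labels = SpectralClustering(n_clusters=2, affinity='rbf').fit_predict(features)
    # Heuristic: the larger cluster is treated as benign (honest majority).
    benign_label = np.bincount(labels).argmax()
    return labels == benign_label

# Example: 10 clients, 3 of which submit crudely scaled adversarial updates.
rng = np.random.default_rng(0)
honest = rng.normal(0, 0.01, size=(7, 256))
byzantine = rng.normal(0, 1.0, size=(3, 256))       # stand-in for a scaling attack
mask = filter_updates(np.vstack([honest, byzantine]))
print("kept updates:", mask)
```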
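The abstract's privacy-preserving aggregation relies on fully homomorphic encryption over floating-point values, which matches CKKS-style schemes. The following sketch uses the TenSEAL library for illustration only; the paper does not name a library or parameter set, so the scheme setup below is an assumption.

```python
# A minimal sketch of encrypted FedAvg-style aggregation with CKKS, using
# TenSEAL. Parameters are illustrative, not the paper's configuration.
import numpy as np
import tenseal as ts

# In a real deployment the secret key stays with clients (or a key authority);
# the server holds only public/evaluation keys. TenSEAL's default context
# keeps the secret key locally, which is convenient for this demo.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Each (benign-flagged) client encrypts its flattened update before upload.
client_updates = [np.random.normal(0, 0.01, 16) for _ in range(3)]
encrypted = [ts.ckks_vector(context, u.tolist()) for u in client_updates]

# The server aggregates ciphertexts directly; plaintext updates never leave
# the clients, so transmission and aggregation both stay encrypted.
agg = encrypted[0]
for ct in encrypted[1:]:
    agg = agg + ct
agg = agg * (1.0 / len(encrypted))   # plaintext-scalar multiply: the mean

# Only a secret-key holder can decrypt the aggregated update.
print(np.round(agg.decrypt(), 4))
print(np.round(np.mean(client_updates, axis=0), 4))
```

The decrypted mean matches the plaintext mean up to CKKS's approximate-arithmetic error, which is the sense in which the abstract's "floating-point computation" preserves computational usability.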
Source journal
Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.