DRIFT: DCT-based robust and intelligent federated learning with trusted privacy
Qihao Dong, Yang Bai, Mang Su, Yansong Gao, Anmin Fu
DOI: 10.1016/j.neucom.2025.131697
Neurocomputing, Volume 658, Article 131697 (published 2025-09-30)
https://www.sciencedirect.com/science/article/pii/S0925231225023690
Citations: 0
Abstract
Federated Learning (FL) allows collaborative model training across decentralized clients without sharing private data. However, traditional FL frameworks face dual challenges: vulnerability to Byzantine attacks (where malicious clients submit adversarial model updates) and privacy breaches (where curious clients infer sensitive information from exchanged parameters), exacerbated by decentralized operations and unencrypted communications. While existing work addresses robustness or privacy individually, the interplay between defense mechanisms, particularly the trade-off between attack resilience and the utility degradation caused by privacy safeguards, remains understudied. To bridge this gap, we propose DRIFT, a novel FL framework that simultaneously achieves Byzantine robustness and privacy preservation. Our approach uniquely combines spectral analysis with cryptographic protection: by transforming model parameters into the frequency domain through the Discrete Cosine Transform (DCT), DRIFT identifies malicious updates via spectral clustering while inherently obscuring sensitive parameter patterns. This defense mechanism is further reinforced by a privacy-preserving aggregation protocol leveraging fully homomorphic encryption with floating-point computation, which encrypts client updates during transmission and aggregation without compromising their computational usability. Extensive evaluations on MNIST and PathMNIST demonstrate that DRIFT outperforms baseline methods in resisting state-of-the-art Byzantine attacks while maintaining model utility and providing provable privacy guarantees.
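The abstract describes filtering Byzantine updates by clustering clients in the DCT frequency domain. The sketch below is a minimal illustration of that general idea, not the paper's actual algorithm: the function name `dct_spectral_filter`, the choice of k-means with two clusters, the number of retained coefficients, and the "larger cluster is benign" heuristic are all assumptions introduced here for illustration.

```python
import numpy as np
from scipy.fft import dct
from sklearn.cluster import KMeans

def dct_spectral_filter(updates, n_coeffs=8):
    # DCT-II (orthonormal) of each flattened update vector; the
    # low-frequency coefficients compactly summarize each client's update.
    spectra = dct(np.asarray(updates), norm="ortho", axis=1)[:, :n_coeffs]
    # Two-way clustering in the spectral domain; treat the larger cluster
    # as benign (a common Byzantine-robustness heuristic, assumed here).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
    benign = 0 if np.sum(labels == 0) >= np.sum(labels == 1) else 1
    kept = [u for u, lab in zip(updates, labels) if lab == benign]
    return np.mean(kept, axis=0)

# Toy round: 8 honest clients near the true update, 2 attackers far away.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.01, size=(8, 32))
attackers = rng.normal(5.0, 0.01, size=(2, 32))
agg = dct_spectral_filter(np.vstack([honest, attackers]))
```

In this toy setup the attackers' large DC (zero-frequency) coefficients separate them cleanly from the honest clients, so the aggregate stays close to the honest mean.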
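The paper's aggregation protocol relies on fully homomorphic encryption over floating-point values; reproducing that here would require an FHE library. As a simpler runnable stand-in, the sketch below uses pairwise additive masking (in the style of secure aggregation, a different technique substituted for illustration) to show the property the abstract claims: the server never sees an individual client's update, yet recovers the exact sum.

```python
import numpy as np

def masked_aggregate(updates, seed=0):
    # Client i adds a random mask m_ij and client j subtracts the same
    # m_ij, so every mask cancels in the server-side sum. The server
    # observes only masked vectors but recovers the exact aggregate.
    rng = np.random.default_rng(seed)
    n, d = updates.shape
    masked = updates.astype(float).copy()
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=d)
            masked[i] += m
            masked[j] -= m
    return masked.sum(axis=0)

updates = np.arange(12, dtype=float).reshape(4, 3)
total = masked_aggregate(updates)
# total equals updates.sum(axis=0) up to floating-point rounding
```

Unlike FHE, this masking scheme offers no protection if clients drop out or collude with the server; it is only meant to make the "aggregate without revealing" property concrete.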
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.