MOFDRNet: A Model for Data Leakage Attacks in Federated Learning
Authors: Yaru Zhao, Jianbiao Zhang, Yihao Cao, Xianqun Han
DOI: 10.1002/cpe.70032 (https://onlinelibrary.wiley.com/doi/10.1002/cpe.70032)
Journal: Concurrency and Computation: Practice and Experience, vol. 37, no. 9-11
Published: 2025-04-09 (Journal Article)
Impact Factor: 1.5; JCR: Q3 (Computer Science, Software Engineering); CAS Region: 4 (Computer Science)
Citations: 0
Abstract
Federated learning allows multiple clients to train local models that are then aggregated on the server side. The process that produces the shared global model is invisible to clients, which gives a malicious attacker the opportunity to exploit this inherent vulnerability of federated learning to mount data leakage attacks. Existing attack techniques are largely client-based and focus on inferring model parameters directly, but they do not carry over to server-based attacks, mainly because the two settings differ in how well an attack generalizes. As a result, few robust server-side data leakage attacks against this federated learning vulnerability have been developed. To address this problem, we propose MOFDRNet, a Multi-Objective Fake Data Regression Network that integrates the loss function with multiple metric strategies. The key idea is to deploy a malicious attack model on the server that generates fake data and labels and iteratively drives their gradients toward the gradients shared between clients and the server, thereby recovering clients' private data. Experimental results demonstrate that MOFDRNet has significant advantages in implementing data leakage attacks. Finally, we also discuss a differential privacy defense approach in this study.
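The gradient-matching idea the abstract describes can be illustrated with a minimal sketch. This is not the paper's MOFDRNet architecture: it assumes a toy linear model with a squared loss and illustrative weights, where a server-side attacker observes a client's shared gradient and runs gradient descent on dummy data and a dummy label until their gradient matches the observed one.

```python
import random

random.seed(0)

# Shared global model: a linear scorer w (illustrative values, not the paper's network).
w = [0.5, -1.0]
x_true = [1.0, 2.0]   # client's private example
y_true = 0.5          # client's private label

def grad(x, y):
    # Gradient of the squared loss (w.x - y)^2 with respect to w.
    r = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2.0 * r * xi for xi in x]

g_shared = grad(x_true, y_true)   # the gradient the server observes from the client

# Attacker's fake data and label, initialized randomly.
x_hat = [random.gauss(0, 1) for _ in range(2)]
y_hat = 0.0

def match_loss(x, y):
    # Squared distance between the fake gradient and the observed shared gradient.
    return sum((a - b) ** 2 for a, b in zip(grad(x, y), g_shared))

eps, lr = 1e-5, 0.002
loss_start = match_loss(x_hat, y_hat)
for _ in range(20000):
    # Central finite differences approximate the gradient of the matching loss
    # with respect to the fake data and fake label.
    gx = []
    for i in range(len(x_hat)):
        xp, xm = x_hat[:], x_hat[:]
        xp[i] += eps
        xm[i] -= eps
        gx.append((match_loss(xp, y_hat) - match_loss(xm, y_hat)) / (2 * eps))
    gy = (match_loss(x_hat, y_hat + eps) - match_loss(x_hat, y_hat - eps)) / (2 * eps)
    x_hat = [xi - lr * gi for xi, gi in zip(x_hat, gx)]
    y_hat -= lr * gy

loss_end = match_loss(x_hat, y_hat)
print(loss_start, loss_end)
```

Note the scale ambiguity even in this toy setting: for a single linear layer the observed gradient determines the data only up to a rescaling between the residual and the input, which is one reason the paper's multi-objective formulation combines several metrics rather than matching a single gradient distance.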
Journal introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.