CGGL: A client-side generative gradient leakage attack with double diffusion prior

Bin Pu, Zhizhi Liu, Liwen Wu, Kai Xu, Bocheng Liang, Ziyang He, Benteng Ma, Lei Zhao

Information Fusion, Volume 123, Article 103292. Published 2025-05-21. DOI: 10.1016/j.inffus.2025.103292
Abstract
Federated learning (FL) has emerged as a widely adopted privacy-preserving distributed framework that facilitates information fusion and model training across multiple clients without requiring direct data sharing with a central server. Despite its advantages, recent studies have revealed that FL is vulnerable to gradient inversion attacks, wherein adversaries can reconstruct clients' private training data from shared gradients. These existing attacks typically assume a malicious or honest-but-curious server, an assumption that is often unrealistic in practical FL deployments. In real-world scenarios, malicious clients are more likely to initiate such attacks. In this paper, we propose a novel Client-side Generative Gradient Leakage (CGGL) attack tailored for FL-based information fusion scenarios. Our approach targets gradient inversion attacks originating from clients and introduces an adaptive poisoning strategy. By utilizing poisoned gradients in the local updates, a malicious client can stealthily embed the target gradients into the aggregated global model updates, enabling the reconstruction of private data from the aggregated gradients. To enhance the effectiveness of the attack, we further develop a reconstruction framework based on a conditional diffusion model incorporating dual diffusion priors. This design significantly improves image reconstruction fidelity, particularly under larger batch sizes and on high-resolution datasets. We validate the proposed CGGL method through extensive experiments on both natural and medical imaging datasets. Results demonstrate that CGGL consistently outperforms existing client-side gradient inversion attacks, achieving pixel-level data reconstruction and revealing substantial privacy risks in FL-enabled information fusion systems—even in the presence of various defense mechanisms.
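To make the threat concrete, the sketch below shows the generic gradient-matching objective that underlies gradient inversion attacks of this family: an attacker optimizes a dummy input so that its gradients match observed gradients. This is a minimal illustration, not the paper's CGGL implementation; the toy model, input shapes, and the assumption that the label is already known are all choices made for demonstration, and the diffusion priors central to CGGL are omitted.

```python
# Minimal sketch (NOT the paper's CGGL code): the classic gradient-matching
# objective behind gradient inversion attacks. Model and shapes are assumed
# purely for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for the shared FL model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

# "Victim" gradients: in FL these would be observed from a shared update.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
g_true = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# The attacker optimizes a dummy input so its gradients match the observed ones.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
y_dummy = y_true  # label assumed known here; often recoverable analytically
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(200):
    opt.zero_grad()
    g_dummy = torch.autograd.grad(
        loss_fn(model(x_dummy), y_dummy), model.parameters(), create_graph=True
    )
    # Plain L2 gradient-matching loss; CGGL additionally conditions the
    # reconstruction on diffusion priors, omitted in this sketch.
    match = sum(((gd - gt) ** 2).sum() for gd, gt in zip(g_dummy, g_true))
    match.backward()
    opt.step()

print(f"final gradient-matching loss: {match.item():.4e}")
```

On an over-parameterized model such as the toy linear classifier above, driving this loss toward zero recovers the input nearly pixel-for-pixel, which is why priors (such as the dual diffusion priors proposed here) matter chiefly for larger batches and higher resolutions, where the gradients alone under-constrain the reconstruction.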
Journal Introduction
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating application to real-world problems, are welcome.