Muhammad Luqman Naseem, Zipeng Ye, Qi Zhou, Wenjian Luo
{"title":"NID:基于神经信息扩散的隐私保护算子","authors":"Muhammad Luqman Naseem , Zipeng Ye , Qi Zhou , Wenjian Luo","doi":"10.1016/j.jisa.2025.104105","DOIUrl":null,"url":null,"abstract":"<div><div>Gradient inversion attacks (GIAs) pose a significant privacy threat to distributed learning paradigms, aiming to reconstruct the victim’s training data with high fidelity through shared gradients. To mitigate this issue, numerous privacy-preserving strategies have been proposed, yet few methods achieve a balance between efficiency, utility and privacy. In this paper, we will explore the limitations of the widely adopted privacy-preserving method in distributed learning, i.e., Local Differential Privacy (LDP), and expose that there is a discrepancy between the conceptualization of privacy budget and its practical application against gradient leakage attacks; simultaneously, we will reveal that under imbalanced data distributions, privacy-preserving methods based on random perturbations inevitably exacerbate the degradation of model performance. To alleviate these issues, we propose a plug-and-play privacy protection method based on Neural Information Diffusion (NID). In our approach, participants in training need only diffuse neural information in an unbiased manner, thus ensuring the privacy through propagatable randomness. We have evaluated our method in privacy-vulnerable scenarios and thoroughly demonstrated its effectiveness in resisting GIAs. Meanwhile, a comprehensive array of experimental configurations robustly shows that NID possesses the capability to balance model utility and privacy.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"93 ","pages":"Article 104105"},"PeriodicalIF":3.8000,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"NID: A privacy-preserving operator based on Neural Information Diffusion\",\"authors\":\"Muhammad Luqman Naseem , Zipeng Ye , Qi Zhou , Wenjian Luo\",\"doi\":\"10.1016/j.jisa.2025.104105\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Gradient inversion attacks (GIAs) pose a significant privacy threat to distributed learning paradigms, aiming to reconstruct the victim’s training data with high fidelity through shared gradients. To mitigate this issue, numerous privacy-preserving strategies have been proposed, yet few methods achieve a balance between efficiency, utility and privacy. In this paper, we will explore the limitations of the widely adopted privacy-preserving method in distributed learning, i.e., Local Differential Privacy (LDP), and expose that there is a discrepancy between the conceptualization of privacy budget and its practical application against gradient leakage attacks; simultaneously, we will reveal that under imbalanced data distributions, privacy-preserving methods based on random perturbations inevitably exacerbate the degradation of model performance. To alleviate these issues, we propose a plug-and-play privacy protection method based on Neural Information Diffusion (NID). In our approach, participants in training need only diffuse neural information in an unbiased manner, thus ensuring the privacy through propagatable randomness. We have evaluated our method in privacy-vulnerable scenarios and thoroughly demonstrated its effectiveness in resisting GIAs. 
Meanwhile, a comprehensive array of experimental configurations robustly shows that NID possesses the capability to balance model utility and privacy.</div></div>\",\"PeriodicalId\":48638,\"journal\":{\"name\":\"Journal of Information Security and Applications\",\"volume\":\"93 \",\"pages\":\"Article 104105\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Information Security and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214212625001425\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212625001425","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
NID: A privacy-preserving operator based on Neural Information Diffusion
Gradient inversion attacks (GIAs) pose a significant privacy threat to distributed learning paradigms: they aim to reconstruct a victim's training data with high fidelity from shared gradients. Numerous privacy-preserving strategies have been proposed to mitigate this threat, yet few achieve a balance among efficiency, utility, and privacy. In this paper, we explore the limitations of the widely adopted privacy-preserving method in distributed learning, Local Differential Privacy (LDP), and expose a discrepancy between the conceptualization of the privacy budget and its practical effect against gradient leakage attacks. We further show that, under imbalanced data distributions, privacy-preserving methods based on random perturbations inevitably exacerbate the degradation of model performance. To alleviate these issues, we propose a plug-and-play privacy-protection method based on Neural Information Diffusion (NID). In our approach, training participants need only diffuse neural information in an unbiased manner, ensuring privacy through propagatable randomness. We evaluate our method in privacy-vulnerable scenarios and demonstrate its effectiveness in resisting GIAs; moreover, a comprehensive array of experimental configurations robustly shows that NID balances model utility and privacy.
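To make the abstract's contrast concrete, the following is a minimal, hypothetical NumPy sketch. The function names (ldp_gaussian, diffuse_unbiased), the mixing coefficient alpha, and the peer-permutation scheme are all illustrative assumptions, not the paper's actual NID operator or API. The first function shows the Gaussian-mechanism perturbation typical of LDP (additive noise on each client's gradient); the second mimics the flavor of unbiased diffusion by mixing gradients across randomly paired participants, so that the aggregate the server sees is preserved exactly while any single shared vector is randomized by the propagation pattern.

import numpy as np

rng = np.random.default_rng(0)

def ldp_gaussian(grad, clip=1.0, sigma=1.0):
    """Baseline LDP sketch: clip the gradient to bound sensitivity,
    then add i.i.d. Gaussian noise calibrated to the clipping norm."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=grad.shape)

def diffuse_unbiased(grads, alpha=0.5):
    """Illustrative diffusion step (an assumption, not the authors' NID):
    each participant convexly mixes its gradient with a randomly assigned
    peer's. Because `peers` is a permutation, the sum (and hence the
    server-side average) of the mixed gradients equals that of the
    originals -- the randomness lies in the propagation pattern rather
    than in additive noise."""
    n = len(grads)
    peers = rng.permutation(n)  # random peer index for each participant
    return [alpha * g + (1.0 - alpha) * grads[peers[i]]
            for i, g in enumerate(grads)]

# Toy usage: three participants with 4-dimensional gradients.
grads = [rng.normal(size=4) for _ in range(3)]
noisy = ldp_gaussian(grads[0])    # perturbed copy one client would share
mixed = diffuse_unbiased(grads)   # diffused copies; aggregate preserved
assert np.allclose(sum(mixed), sum(grads))

The closing assertion highlights the property the abstract calls "unbiased": unlike additive LDP noise, which biases the effective update at any finite privacy budget and hurts utility under imbalanced data, permutation-based mixing leaves the aggregate exactly intact while each individually shared vector no longer corresponds to a single client's gradient.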
Journal introduction:
Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.