Evaluating privacy loss in differential privacy based federated learning

Impact Factor: 6.2 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods)
Shangyin Weng, Yan Gou, Lei Zhang, Muhammad Ali Imran
{"title":"Evaluating privacy loss in differential privacy based federated learning","authors":"Shangyin Weng,&nbsp;Yan Gou,&nbsp;Lei Zhang,&nbsp;Muhammad Ali Imran","doi":"10.1016/j.future.2025.107848","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning (FL) trains a global model by aggregating local training gradients, but private information can be leaked from these gradients. To enhance privacy, differential privacy (DP) is often used by adding artificial noise. However, this approach reduces accuracy compared to noise-free learning. Balancing privacy protection and model accuracy remains a key challenge for DP-based FL. Additionally, current methods use theoretical bounds to measure privacy loss, lacking an intuitive assessment. In this paper, we first propose an evaluation method for privacy leakage in the FL by utilizing reconstruction attacks to analyze the difference between the original images and reconstructed ones. We then formulate the problems of investigating DP’s effect on the reconstruction attack, where we study the accumulative privacy loss under two different reconstruction attack settings and prove that anonymous local clients can decrease the probability of privacy leakage. Next, we study the effects of different clipping methods, including fixed constants and the median value of the unclipped gradients’ norm, on privacy protection and learning performance. Furthermore, we derive the theoretical convergence analysis for the cosine similarity and <span><math><msub><mrow><mi>l</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span>-norm-based reconstruction attack under DP noise. We conduct extensive simulations to show how DP settings affect privacy leakage and characterize the trade-off between privacy protection and learning accuracy.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"172 ","pages":"Article 107848"},"PeriodicalIF":6.2000,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25001438","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) trains a global model by aggregating local training gradients, but private information can be leaked from these gradients. To enhance privacy, differential privacy (DP) is often applied by adding artificial noise to the gradients. However, this reduces accuracy compared to noise-free learning, so balancing privacy protection and model accuracy remains a key challenge for DP-based FL. Moreover, current methods measure privacy loss through theoretical bounds, which lack an intuitive assessment. In this paper, we first propose an evaluation method for privacy leakage in FL that uses reconstruction attacks to analyze the difference between original images and their reconstructions. We then formulate the problem of investigating DP's effect on reconstruction attacks, studying the cumulative privacy loss under two different reconstruction attack settings and proving that anonymous local clients can decrease the probability of privacy leakage. Next, we study the effects of different clipping methods, including fixed constants and the median of the unclipped gradients' norms, on privacy protection and learning performance. Furthermore, we derive a theoretical convergence analysis for cosine-similarity- and l2-norm-based reconstruction attacks under DP noise. Extensive simulations show how DP settings affect privacy leakage and characterize the trade-off between privacy protection and learning accuracy.
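As a concrete illustration of the mechanisms the abstract describes, the following minimal Python/PyTorch sketch (hypothetical code, not the paper's implementation; the function names dp_perturb and reconstruction_loss and all default parameters are assumptions) shows DP gradient perturbation under the two clipping strategies mentioned, a fixed constant versus the median of the unclipped gradient norms, together with the gradient-matching objective that a cosine-similarity- or l2-norm-based reconstruction attack would minimize.

    # Hypothetical sketch of the abstract's ingredients, not the paper's code.
    import torch

    def dp_perturb(per_sample_grads, sigma, clip="fixed", C=1.0):
        # per_sample_grads: (batch, dim) tensor of per-example gradients.
        norms = per_sample_grads.norm(dim=1)            # unclipped l2 norms
        # Clipping threshold: fixed constant C, or median of unclipped norms.
        threshold = C if clip == "fixed" else norms.median().item()
        # Scale each gradient so its norm is at most the threshold.
        scale = (threshold / norms.clamp(min=threshold)).unsqueeze(1)
        clipped = per_sample_grads * scale              # ||g_i||_2 <= threshold
        # Gaussian DP noise calibrated to the clipping threshold (sensitivity).
        noise = torch.randn(per_sample_grads.shape[1]) * sigma * threshold
        return (clipped.sum(dim=0) + noise) / per_sample_grads.shape[0]

    def reconstruction_loss(dummy_grad, observed_grad, metric="cosine"):
        # Gradient-matching objective a reconstruction attack minimizes with
        # respect to dummy inputs whose gradients should match observed ones.
        g1, g2 = dummy_grad.flatten(), observed_grad.flatten()
        if metric == "cosine":
            return 1.0 - torch.nn.functional.cosine_similarity(g1, g2, dim=0)
        return (g1 - g2).norm() ** 2                    # l2-norm variant

Under such a scheme, a larger noise multiplier sigma or a tighter clipping threshold makes the released gradient noisier, which would be expected to degrade the attack's reconstruction quality (the privacy-leakage measure the paper proposes) while also costing learning accuracy, matching the trade-off the abstract characterizes.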
Source journal: Future Generation Computer Systems: The International Journal of eScience
CiteScore: 19.90
Self-citation rate: 2.70%
Articles published per year: 376
Review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.