Enforcing Differential Privacy in Federated Learning via Long-Term Contribution Incentives

Impact Factor 8.0 · CAS Region 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, THEORY & METHODS
Xiangyun Tang;Luyao Peng;Yu Weng;Meng Shen;Liehuang Zhu;Robert H. Deng
DOI: 10.1109/TIFS.2025.3550777
Journal: IEEE Transactions on Information Forensics and Security, vol. 20, pp. 3102-3115
Published: 2025-03-12 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10924261/
Citations: 0

Abstract

Privacy-preserving Federated Learning (FL) based on Differential Privacy (DP) protects clients’ data by adding DP noise to samples’ gradients and has emerged as a de facto standard for data privacy in FL. However, the accuracy of global models in DP-based FL may be reduced significantly when rogue clients deviate from the prescribed DP-based FL protocol and selfishly inject excessive DP noise, applying a smaller privacy budget in the DP mechanism to obtain a higher level of privacy for themselves. Existing DP-based FL fails to prevent such attacks because they are imperceptible: since the DP noise is random Gaussian noise, the local model parameters of rogue clients and honest clients have identical distributions. Rogue local models do show low performance, but directly filtering out low-performance local models compromises the generalizability of the global model, as local models trained on scarce data also perform poorly in early epochs. In this paper, we propose ReFL, a novel privacy-preserving FL system that enforces DP and avoids the global-model accuracy reduction caused by the excessive DP noise of rogue clients. Based on the observation that rogue local models with excessive DP noise and honest local models trained on scarce data exhibit different performance patterns over long-term training epochs, we propose a long-term contribution incentive scheme that evaluates clients’ reputations and identifies rogue clients. Furthermore, we design a reputation-based aggregation that uses these incentive reputations to prevent rogue clients’ models from damaging global model accuracy. Extensive experiments demonstrate that ReFL achieves global model accuracy 0.77%-81.71% higher than existing DP-based FL methods in the presence of rogue clients.
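The two mechanisms the abstract describes — Gaussian-mechanism DP noise calibrated to a client-chosen privacy budget, and reputation-weighted aggregation — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names, the noise calibration formula (the standard Gaussian mechanism), and the proportional reputation weighting are assumptions.

```python
import numpy as np

def gaussian_dp_gradient(grad, clip_norm, epsilon, delta, rng):
    """Clip a gradient and add Gaussian noise calibrated to (epsilon, delta)-DP.

    A smaller epsilon (privacy budget) yields a larger noise scale sigma,
    which is how a rogue client can degrade the global model while its
    parameters remain distributionally indistinguishable from an honest
    client's.
    """
    # Clip to bound the gradient's L2 sensitivity by clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Standard Gaussian-mechanism calibration:
    # sigma = sqrt(2 ln(1.25/delta)) * C / epsilon
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

def reputation_weighted_aggregate(local_models, reputations):
    """Aggregate local model updates with weights proportional to client
    reputation, so low-reputation (suspected rogue) clients contribute
    little to the global model."""
    w = np.asarray(reputations, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, local_models))
```

A rogue client would simply call `gaussian_dp_gradient` with a much smaller `epsilon` than agreed; the long-term incentive scheme in the paper is what drives its reputation (and hence its aggregation weight) down over many epochs.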
Source Journal

IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Aims and Scope: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.