Analyze and Improve Differentially Private Federated Learning: A Model Robustness Perspective

IF 6.3 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods)
Shuaishuai Zhang;Jie Huang;Peihao Li
DOI: 10.1109/TIFS.2024.3518058
Journal: IEEE Transactions on Information Forensics and Security, vol. 20, pp. 807-821
Publication date: 2024-12-16 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10802990/
Citations: 0

Abstract

Differentially Private Federated Learning (DPFL) applies differential privacy (DP) techniques to preserve clients' privacy in Federated Learning (FL). Existing methods based on the Gaussian mechanism require clipping model updates and injecting noise, which leads to serious degradation in model accuracy. Several improved methods mitigate this degradation by decreasing the scale of the injected noise. Unlike previous methods, we are the first to propose enhancing the model's robustness against DP noise to improve accuracy. In this paper, we develop a novel FL scheme with improved model robustness, called FedIMR, which provides a client-level DP guarantee while maintaining high model accuracy. We find that the injected noise causes the loss values to fluctuate during local training, seriously hindering model convergence. This motivates us to improve model robustness so as to narrow the bias in model outputs caused by the noise. Model robustness is evaluated with the signal-to-noise ratio (SNR) of each layer's outputs. Two techniques are proposed to improve the output SNR: logit vector normalization (LVN) and a dynamic clipping threshold (DCT). Specifically, LVN normalizes the logit vector so that the optimization algorithm keeps increasing the model output, which is the signal term of the output SNR. DCT dynamically adjusts the clipping threshold to reduce the noise term of the output SNR. We also provide a privacy analysis and convergence results. Experiments are conducted on three widely used datasets to evaluate the effectiveness of our method. Both the theoretical results and the empirical experiments confirm that FedIMR achieves a better accuracy-privacy tradeoff than previous methods.
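The abstract names three moving parts: the Gaussian-mechanism baseline (clip each client update, then add noise), LVN (rescale the logit vector), and DCT (adapt the clipping threshold over rounds). The exact formulas are not given in this excerpt, so the sketch below is only a minimal illustration under assumed forms: DP-FedAvg-style clipping plus Gaussian noise for the baseline, and hypothetical stand-ins for LVN and DCT (the function names, the fixed-norm rescaling, and the median heuristic are my assumptions, not the paper's definitions).

```python
import numpy as np

def clip_and_noise(update, clip_c, noise_multiplier, rng):
    # Gaussian-mechanism baseline: clip the update to L2 norm at most clip_c,
    # then add Gaussian noise with std noise_multiplier * clip_c
    # (clip_c bounds the per-client sensitivity).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_c / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_c, size=update.shape)

def logit_vector_normalization(logits, scale=4.0):
    # Hypothetical LVN step: rescale the logit vector to a fixed norm so the
    # "signal" magnitude of the output stays controlled relative to the noise.
    return scale * logits / (np.linalg.norm(logits) + 1e-12)

def dynamic_clipping_threshold(recent_update_norms):
    # Hypothetical DCT heuristic: use the median of recent (pre-clipping)
    # update norms as the next round's threshold, so the threshold -- and
    # hence the injected noise std -- shrinks as updates shrink in training.
    return float(np.median(recent_update_norms))

rng = np.random.default_rng(0)
u = np.array([3.0, 4.0])                               # raw client update, L2 norm 5.0
noisy = clip_and_noise(u, clip_c=1.0, noise_multiplier=0.5, rng=rng)
z = logit_vector_normalization(np.array([2.0, -1.0, 0.5]))
c_next = dynamic_clipping_threshold([0.9, 1.4, 0.7])   # median of the three norms
```

Note how the noise std is tied to the clipping threshold: any mechanism that lets the threshold shrink (as DCT does) directly reduces the noise term of the output SNR, which is the tradeoff the abstract highlights.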
Source journal: IEEE Transactions on Information Forensics and Security (Engineering Technology – Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.