{"title":"分析和改进差异化私有联合学习:模型鲁棒性视角","authors":"Shuaishuai Zhang;Jie Huang;Peihao Li","doi":"10.1109/TIFS.2024.3518058","DOIUrl":null,"url":null,"abstract":"Differentially Private Federated learning (DPFL) applies differential privacy (DP) techniques to preserve clients’ privacy in Federated Learning (FL). Existing methods based on Gaussian Mechanism require the operations of model updates clipping and noise injection, which lead to a serious degradation in model accuracies. Several improved methods are proposed to mitigate the accuracy degradation by decreasing the scale of the injected noise. Different from previous methods, we firstly propose to enhance the model robustness against the DP noise for the accuracy improvement. In this paper, we develop a novel FL scheme with improved model robustness, called FedIMR, which can provide the client-level DP guarantee while maintaining a high model accuracy. We find that the injected noise leads to the fluctuation of loss values in the local training, hindering the model convergence seriously. This motivates us to improve the model robustness for narrowing down the bias of model outputs caused by the noise. The model robustness is evaluated with the signal-to-noise ratio (SNR) of each layer’s outputs. Two techniques are proposed to improve the output SNR, including the logit vector normalization (LVN) and dynamic clipping threshold (DCT). Specifically, LVN normalizes the logit vertor to make the optimization algorithm keep increasing the model output, which is the signal item of the output SNR. DCT dynamically adjusts the clipping threshold to reduce the noise item of the output SNR. We also provide the privacy analysis and convergence results. Experiments are conducted over three famous datasets to evaluate the effectiveness of our method. Both the theoretical results and empirical experiments confirm that our FedIMR can achieve a better accuracy-privacy tradeoff than previous methods.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"807-821"},"PeriodicalIF":6.3000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analyze and Improve Differentially Private Federated Learning: A Model Robustness Perspective\",\"authors\":\"Shuaishuai Zhang;Jie Huang;Peihao Li\",\"doi\":\"10.1109/TIFS.2024.3518058\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Differentially Private Federated learning (DPFL) applies differential privacy (DP) techniques to preserve clients’ privacy in Federated Learning (FL). Existing methods based on Gaussian Mechanism require the operations of model updates clipping and noise injection, which lead to a serious degradation in model accuracies. Several improved methods are proposed to mitigate the accuracy degradation by decreasing the scale of the injected noise. Different from previous methods, we firstly propose to enhance the model robustness against the DP noise for the accuracy improvement. In this paper, we develop a novel FL scheme with improved model robustness, called FedIMR, which can provide the client-level DP guarantee while maintaining a high model accuracy. We find that the injected noise leads to the fluctuation of loss values in the local training, hindering the model convergence seriously. This motivates us to improve the model robustness for narrowing down the bias of model outputs caused by the noise. 
The model robustness is evaluated with the signal-to-noise ratio (SNR) of each layer’s outputs. Two techniques are proposed to improve the output SNR, including the logit vector normalization (LVN) and dynamic clipping threshold (DCT). Specifically, LVN normalizes the logit vertor to make the optimization algorithm keep increasing the model output, which is the signal item of the output SNR. DCT dynamically adjusts the clipping threshold to reduce the noise item of the output SNR. We also provide the privacy analysis and convergence results. Experiments are conducted over three famous datasets to evaluate the effectiveness of our method. Both the theoretical results and empirical experiments confirm that our FedIMR can achieve a better accuracy-privacy tradeoff than previous methods.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"807-821\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-12-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10802990/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10802990/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Analyze and Improve Differentially Private Federated Learning: A Model Robustness Perspective
Differentially Private Federated Learning (DPFL) applies differential privacy (DP) techniques to preserve clients' privacy in Federated Learning (FL). Existing methods based on the Gaussian mechanism require clipping model updates and injecting noise, which leads to a serious degradation in model accuracy. Several improved methods mitigate this degradation by decreasing the scale of the injected noise. Unlike previous methods, we are the first to propose enhancing model robustness against the DP noise to improve accuracy. In this paper, we develop a novel FL scheme with improved model robustness, called FedIMR, which provides a client-level DP guarantee while maintaining high model accuracy. We find that the injected noise causes the loss values to fluctuate during local training, seriously hindering model convergence. This motivates us to improve model robustness in order to reduce the bias in model outputs caused by the noise. Model robustness is evaluated with the signal-to-noise ratio (SNR) of each layer's outputs. Two techniques are proposed to improve the output SNR: logit vector normalization (LVN) and a dynamic clipping threshold (DCT). Specifically, LVN normalizes the logit vector so that the optimization algorithm keeps increasing the model output, which is the signal term of the output SNR. DCT dynamically adjusts the clipping threshold to reduce the noise term of the output SNR. We also provide a privacy analysis and convergence results. Experiments are conducted on three widely used datasets to evaluate the effectiveness of our method. Both the theoretical results and the empirical experiments confirm that FedIMR achieves a better accuracy-privacy tradeoff than previous methods.
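For context on the baseline the abstract refers to: under the Gaussian mechanism, each client's update is clipped to an L2 norm bound and then perturbed with Gaussian noise whose scale is proportional to that bound. Below is a minimal NumPy sketch of that per-client step; the function and parameter names (`privatize_update`, `clip_threshold`, `noise_multiplier`) are illustrative, not taken from the paper.

```python
import numpy as np

def privatize_update(update, clip_threshold, noise_multiplier, rng=None):
    """Gaussian-mechanism step: clip a flattened client update to
    L2 norm <= clip_threshold, then add Gaussian noise whose standard
    deviation is noise_multiplier * clip_threshold."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_threshold / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_threshold, size=update.shape)
    return clipped + noise
```

Because the noise standard deviation is tied to `clip_threshold`, the two operations the abstract names (clipping and noise injection) are exactly the levers that LVN and DCT target.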
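The abstract evaluates robustness via the SNR of each layer's outputs. One plausible formalization (our reading, not necessarily the paper's exact definition) compares the energy of the noise-free activations against the energy of the deviation introduced by the DP noise:

```python
import numpy as np

def layer_output_snr(clean_out, noisy_out, eps=1e-12):
    """Illustrative SNR of a layer's outputs: energy of the clean
    activations over the energy of the noise-induced deviation."""
    signal = float(np.sum(clean_out ** 2))
    noise = float(np.sum((noisy_out - clean_out) ** 2)) + eps
    return signal / noise
```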
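LVN is described only at a high level: normalizing the logit vector so that optimization keeps increasing the signal term of the output SNR. A minimal sketch of one natural instantiation, L2-normalizing the logits to a fixed radius before the loss, is given below; the paper's exact LVN formula may differ, and `radius` is a hypothetical hyperparameter.

```python
import numpy as np

def normalize_logit_vector(logits, radius=1.0, eps=1e-12):
    """Illustrative LVN: rescale each logit vector to L2 norm `radius`,
    so the loss depends on the direction of the logits rather than
    their raw magnitude."""
    norms = np.linalg.norm(logits, axis=-1, keepdims=True)
    return radius * logits / (norms + eps)
```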
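DCT adjusts the clipping threshold over rounds to shrink the injected noise, whose scale is proportional to the threshold. The abstract does not give the schedule; a common heuristic, shown here purely as an assumption, tracks a quantile of the clients' recent update norms:

```python
import numpy as np

def dynamic_clip_threshold(recent_update_norms, quantile=0.5, floor=1e-3):
    """Hypothetical DCT rule: set the next round's clipping threshold to
    a quantile of recently observed client update norms, so the threshold
    (and the noise std tied to it) decays as training converges."""
    return max(float(np.quantile(recent_update_norms, quantile)), floor)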
Journal Introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.