{"title":"Exploring the Privacy-Accuracy Trade-Off Using Adaptive Gradient Clipping in Federated Learning","authors":"Benteng Zhang;Yingchi Mao;Xiaoming He;Ping Ping;Huawei Huang;Jie Wu","doi":"10.1109/TNSE.2025.3546777","DOIUrl":null,"url":null,"abstract":"In Differentially Private Federated Learning (DP-FL), gradient clipping can prevent excessive noise from being added to the gradient and ensure that the impact of noise is within a controllable range. However, state-of-the-art methods adopt fixed or imprecise clipping thresholds for gradient clipping, which is not adaptive to the changes in the gradients. This issue can lead to a significant degradation in accuracy while training the global model. To this end, we propose Differential Privacy Federated Adaptive gradient Clipping based on gradient Norm (DP-FedACN). DP-FedACN can calculate the decay rate of the clipping threshold by considering the overall changing trend of the gradient norm. Furthermore, DP-FedACN can accurately adjust the clipping threshold for each training round according to the actual changes in gradient norm, clipping loss, and decay rate. Experimental results demonstrate that DP-FedACN can maintain privacy protection performance similar to that of DP-FedAvg under member inference attacks and model inversion attacks. DP-FedACN significantly outperforms DP-FedAGNC and DP-FedDDC in privacy protection metrics. Additionally, the test accuracy of DP-FedACN is approximately 2.61%, 1.01%, and 1.03% higher than the other three baseline methods, respectively. DP-FedACN can improve the global model training accuracy while ensuring the privacy protection of the model. All experimental results demonstrate that the proposed DP-FedACN can help find a fine-grained privacy-accuracy trade-off in DP-FL.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 3","pages":"2254-2265"},"PeriodicalIF":6.7000,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10908083/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
In Differentially Private Federated Learning (DP-FL), gradient clipping prevents excessive noise from being added to the gradients and keeps the impact of noise within a controllable range. However, state-of-the-art methods adopt fixed or imprecise clipping thresholds, which do not adapt to changes in the gradients. This can lead to a significant degradation in accuracy when training the global model. To this end, we propose Differential Privacy Federated Adaptive gradient Clipping based on gradient Norm (DP-FedACN). DP-FedACN computes the decay rate of the clipping threshold from the overall trend of the gradient norm, and then adjusts the clipping threshold in each training round according to the actual changes in gradient norm, clipping loss, and decay rate. Experimental results demonstrate that DP-FedACN maintains privacy protection comparable to that of DP-FedAvg under membership inference attacks and model inversion attacks, and significantly outperforms DP-FedAGNC and DP-FedDDC on privacy protection metrics. Additionally, the test accuracy of DP-FedACN is approximately 2.61%, 1.01%, and 1.03% higher than that of the three baseline methods, respectively. DP-FedACN thus improves global model training accuracy while preserving the privacy of the model, and all experimental results demonstrate that the proposed DP-FedACN can help find a fine-grained privacy-accuracy trade-off in DP-FL.
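To make the idea of threshold adaptation concrete, the sketch below shows a minimal DP-SGD-style clip-and-noise step combined with a clipping threshold that decays toward recently observed gradient norms. This is an illustrative sketch only: the `update_threshold` rule, the `decay_rate` value, and the toy client gradients are assumptions made here, and the paper's actual DP-FedACN update (which also accounts for clipping loss) is not reproduced from the abstract.

```python
import numpy as np

def clip_and_noise(grad, clip_threshold, noise_multiplier, rng):
    """Clip a gradient to an L2-norm bound and add Gaussian noise scaled
    to that bound (standard DP-SGD-style clipping, not the paper's exact rule)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_threshold / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_threshold, size=grad.shape)
    return clipped + noise

def update_threshold(threshold, recent_norms, decay_rate, floor=1e-3):
    """Hypothetical adaptive rule: move the clipping threshold toward the
    median of recent gradient norms at a rate controlled by decay_rate."""
    target = np.median(recent_norms)
    new_threshold = (1.0 - decay_rate) * threshold + decay_rate * target
    return max(new_threshold, floor)

# Toy federated-style loop: per-round clipping with an adapting threshold.
rng = np.random.default_rng(0)
threshold, decay_rate, noise_multiplier = 1.0, 0.1, 0.5
norm_history = []
for round_idx in range(5):
    # Pretend each "client gradient" is a random vector whose norm shrinks over rounds.
    grads = [rng.normal(0, 1.0 / (round_idx + 1), size=10) for _ in range(4)]
    noisy = [clip_and_noise(g, threshold, noise_multiplier, rng) for g in grads]
    aggregate = np.mean(noisy, axis=0)  # server-side averaging, as in FedAvg
    norm_history.extend(np.linalg.norm(g) for g in grads)
    threshold = update_threshold(threshold, norm_history[-4:], decay_rate)
    print(f"round {round_idx}: clip threshold -> {threshold:.3f}, "
          f"aggregate norm = {np.linalg.norm(aggregate):.3f}")
```

The design intuition this sketch captures is the one stated in the abstract: a threshold that tracks the shrinking gradient norms clips less information away than a fixed threshold, while the noise scale (tied to the threshold) shrinks with it.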
Journal description:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles on the theory and applications of network science and on the interconnections among the elements of a system that form a network. In particular, TNSE publishes articles on understanding, predicting, and controlling the structures and behaviors of networks at a fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between, and co-evolution of, different genres of networks.