Analyzing and Enhancing LDP Perturbation Mechanisms in Federated Learning
Authors: Jiawei Duan; Qingqing Ye; Haibo Hu; Xinyue Sun
DOI: 10.1109/TKDE.2025.3580796
Journal: IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 10, pp. 5767-5780
Publication date: 2025-06-18
URL: https://ieeexplore.ieee.org/document/11039646/
Citations: 0
Abstract
Recently, federated learning (FL) has become a prevalent paradigm for harvesting data while preserving privacy. However, private information can still be leaked through the local parameters transmitted between local parties and the central server. To address this problem, local differential privacy (LDP) has been adopted: in federated LDP-SGD, each local device sends only perturbed parameters to the central server. However, due to the low model efficiency caused by overwhelming LDP noise, only a relaxed LDP privacy scheme, namely the Gaussian mechanism, has been explored in the federated LDP-SGD literature. The objective of this paper is to enable other LDP mechanisms (e.g., Laplace, Piecewise, Square Wave and Gaussian) in federated learning by enhancing their model efficiency. We first propose an analytical framework that generalizes federated LDP-SGD and derives its model efficiency. Serving as a benchmark, this framework can compare the performance of different LDP mechanisms in federated learning. Based on this framework, we identify a new perspective for generally optimizing federated LDP-SGD, namely the vectorized perturbation strategy LDPVec. By perturbing only the direction of a gradient, LDPVec better preserves the descending direction of the gradient, which leads to comprehensive efficiency improvements across various LDP mechanisms.
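The core idea of direction-only perturbation can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's actual LDPVec algorithm: it normalizes a client gradient to unit length and then perturbs that unit direction with Gaussian noise calibrated by the standard analytic formula for (epsilon, delta)-DP. The function name, the sensitivity value, and the calibration are all illustrative choices.

```python
import math
import random

def perturb_direction_sketch(grad, epsilon, delta, sensitivity=2.0):
    """Hedged sketch of a direction-only LDP perturbation.

    Instead of adding noise to the raw gradient (whose magnitude can be
    large), we normalize it to a unit vector and perturb only that
    direction. The Gaussian noise scale follows the common calibration
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon; the
    sensitivity of 2.0 (the diameter of the unit ball) is an assumed
    illustrative bound, not taken from the paper.
    """
    # Unit-length descending direction; guard against a zero gradient.
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    direction = [g / norm for g in grad]

    # Analytic Gaussian calibration for (epsilon, delta)-DP.
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

    # Perturb each coordinate of the direction independently.
    return [d + random.gauss(0.0, sigma) for d in direction]
```

Because only the unit direction is released, the server can average the noisy directions across clients and apply its own step size; intuitively, noise of a fixed scale corrupts a unit vector's orientation far less than it corrupts an unbounded raw gradient.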
About the journal
The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.