Title: Local differential privacy federated learning based on heterogeneous data multi-privacy mechanism
Authors: Jie Wang, Zhiju Zhang, Jing Tian, Hongtao Li
DOI: 10.1016/j.comnet.2024.110822
Journal: Computer Networks (Q1, Computer Science, Hardware & Architecture; Impact Factor 4.4)
Publication date: 2024-09-26
URL: https://www.sciencedirect.com/science/article/pii/S1389128624006546
Citations: 0
Abstract
Federated learning enables the development of robust models without directly accessing users' data. However, recent studies indicate that federated learning remains vulnerable to privacy leakage. To address this issue, local differential privacy mechanisms have been incorporated into federated learning. Nevertheless, local differential privacy reduces the utility of the data. To explore the balance between privacy budgets and data availability in federated learning, we propose FedAPCA, a federated learning framework that combines clustering hierarchical aggregation with adaptive piecewise mechanisms under multiple privacy levels, balancing privacy preservation against model accuracy. First, we introduce an adaptive piecewise mechanism that dynamically adjusts perturbation intervals based on the data ranges of the different layers of the model, minimizing perturbation variance while maintaining the same level of privacy. Second, we propose two dynamic privacy budget allocation methods: one allocates the privacy budget based on global accuracy and global loss, and the other based on local accuracy and local loss, so that better model accuracy can be achieved under the same privacy budget. Finally, we propose a clustering hierarchical aggregation method for the model aggregation stage: within each cluster, the perturbed updates are corrected by unbiased estimation according to the variance of each layer before the model is updated and aggregated. FedAPCA improves the balance between privacy preservation and model accuracy. Our experimental results, comparing FedAPCA with state-of-the-art multi-privacy local differential privacy federated learning frameworks on the MNIST and CIFAR-10 datasets, demonstrate that FedAPCA improves model accuracy by 1%–2%.
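The paper's adaptive mechanism builds on the classic piecewise mechanism for collecting one-dimensional numeric values under ε-local differential privacy. As a point of reference, here is a minimal sketch of that standard (non-adaptive) mechanism for values normalized to [-1, 1]; the function and variable names are illustrative and are not taken from the paper, and this does not include the paper's adaptive interval adjustment:

```python
import math
import random

def piecewise_mechanism(t: float, epsilon: float, rng: random.Random) -> float:
    """Perturb a value t in [-1, 1] under epsilon-LDP with the piecewise
    mechanism; the reported value is an unbiased estimate of t."""
    assert -1.0 <= t <= 1.0
    z = math.exp(epsilon / 2)
    C = (z + 1) / (z - 1)               # reported values lie in [-C, C]
    l = (C + 1) / 2 * t - (C - 1) / 2   # left end of the high-probability interval
    r = l + C - 1                       # right end; the interval has length C - 1
    # With probability z / (z + 1), report a value close to t ...
    if rng.random() < z / (z + 1):
        return rng.uniform(l, r)
    # ... otherwise report uniformly from the two outer intervals,
    # choosing a side in proportion to its length.
    left_len = l - (-C)
    right_len = C - r
    if rng.random() < left_len / (left_len + right_len):
        return rng.uniform(-C, l)
    return rng.uniform(r, C)

# Averaging many perturbed reports recovers the true value (unbiasedness):
rng = random.Random(0)
t, eps = 0.4, 1.0
est = sum(piecewise_mechanism(t, eps, rng) for _ in range(200_000)) / 200_000
```

In a federated setting, each client would apply such a mechanism to its (clipped and normalized) model updates before upload; the adaptive variant proposed in the paper additionally tunes the perturbation intervals per layer to reduce variance at the same privacy level.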
Journal overview:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.