Local differential privacy federated learning based on heterogeneous data multi-privacy mechanism

IF 4.4 | CAS Region 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Jie Wang, Zhiju Zhang, Jing Tian, Hongtao Li
{"title":"Local differential privacy federated learning based on heterogeneous data multi-privacy mechanism","authors":"Jie Wang,&nbsp;Zhiju Zhang,&nbsp;Jing Tian,&nbsp;Hongtao Li","doi":"10.1016/j.comnet.2024.110822","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning enables the development of robust models without accessing users data directly. However, recent studies indicate that federated learning remains vulnerable to privacy leakage. To address this issue, local differential privacy mechanisms have been incorporated into federated learning. Nevertheless, local differential privacy will reduce the availability of data. To explore the balance between privacy budgets and data availability in federated learning, we propose federated learning for clustering hierarchical aggregation with adaptive piecewise mechanisms under multiple privacy-FedAPCA as a way to balance the relationship between privacy preservation and model accuracy. First, we introduce an adaptive piecewise mechanism that dynamically adjusts perturbation intervals based on the data ranges across different layers of the model, ensuring minimized perturbation variance while maintaining the same level of privacy. Second, we propose two dynamic privacy budget allocation methods, which are allocating the privacy budget based on global accuracy and global loss, and allocating the privacy budget based on local accuracy and loss, to ensure that better model accuracy can be achieved under the same privacy budget. Finally, we propose a clustering hierarchical aggregation method in the model aggregation stage, and the model is updated and aggregated after the unbiased estimation of the disturbance in each cluster according to the variance of each layer. FedAPCA improves the balance between privacy preservation and model accuracy. Our experimental results, comparing FedAPCA with the SOTA multi-privacy local differential privacy federated learning frameworks on the MNIST and CIFAR-10 datasets, demonstrate that FedAPCA improves model accuracy by 1%–2%.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4000,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128624006546","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning enables the development of robust models without directly accessing users' data. However, recent studies indicate that federated learning remains vulnerable to privacy leakage. To address this issue, local differential privacy mechanisms have been incorporated into federated learning. Nevertheless, local differential privacy reduces data utility. To explore the balance between privacy budgets and data availability in federated learning, we propose FedAPCA, a federated learning framework that combines clustering hierarchical aggregation with an adaptive piecewise mechanism under multiple privacy levels, to balance privacy preservation against model accuracy. First, we introduce an adaptive piecewise mechanism that dynamically adjusts perturbation intervals based on the data ranges across different layers of the model, minimizing perturbation variance while maintaining the same level of privacy. Second, we propose two dynamic privacy budget allocation methods, one driven by global accuracy and loss and the other by local accuracy and loss, so that better model accuracy can be achieved under the same privacy budget. Finally, we propose a clustering hierarchical aggregation method for the model aggregation stage: within each cluster, the perturbed updates are corrected by an unbiased estimate of the disturbance according to the variance of each layer, and the model is then updated and aggregated. FedAPCA improves the balance between privacy preservation and model accuracy. Our experimental results, comparing FedAPCA with state-of-the-art multi-privacy local differential privacy federated learning frameworks on the MNIST and CIFAR-10 datasets, demonstrate that FedAPCA improves model accuracy by 1%–2%.
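To make the first contribution concrete, the sketch below implements the standard piecewise mechanism for values in [-1, 1] (Wang et al., ICDE 2019, the mechanism this family of work builds on), plus a per-layer rescaling step in the spirit of the abstract's adaptive perturbation intervals. The `perturb_layer` helper and the use of each layer's min/max as its data range are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def piecewise_mechanism(t: float, eps: float) -> float:
    """Perturb a value t in [-1, 1] with the piecewise mechanism
    (Wang et al., 2019). The output is an unbiased estimate of t
    and satisfies eps-local differential privacy."""
    z = np.exp(eps / 2)
    C = (z + 1) / (z - 1)
    l = (C + 1) / 2 * t - (C - 1) / 2
    r = l + C - 1
    if np.random.rand() < z / (z + 1):
        # "Center" piece: report a value close to the true input.
        return np.random.uniform(l, r)
    # "Tail" pieces: report a far value, chosen proportionally to the
    # lengths of the two remaining intervals [-C, l] and [r, C].
    left_len, right_len = l + C, C - r
    if np.random.rand() < left_len / (left_len + right_len):
        return np.random.uniform(-C, l)
    return np.random.uniform(r, C)

def perturb_layer(weights: np.ndarray, eps: float) -> np.ndarray:
    """Adaptively rescale one layer's weights to [-1, 1] using that
    layer's own value range (an illustrative reading of the per-layer
    interval adjustment), perturb each value, and map back. The linear
    rescaling preserves unbiasedness."""
    lo, hi = weights.min(), weights.max()
    if hi == lo:
        return weights.copy()
    scaled = 2 * (weights - lo) / (hi - lo) - 1
    noisy = np.vectorize(lambda t: piecewise_mechanism(t, eps))(scaled)
    return (noisy + 1) / 2 * (hi - lo) + lo
```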
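The abstract describes the two budget allocation strategies only at a high level. Below is one hedged reading of the global variant: each round receives an even share of the remaining budget, boosted as the global loss plateaus so that late, fine-grained updates are perturbed less. The function name, the `boost` knob, and the plateau heuristic are assumptions for illustration, not the paper's formula.

```python
def next_round_budget(remaining_eps: float, rounds_left: int,
                      prev_loss: float, curr_loss: float,
                      boost: float = 0.5) -> float:
    """Illustrative global-signal budget allocator (not the paper's rule).
    Baseline: an even split of the remaining budget across the rounds
    left. Adjustment: when the relative loss improvement is small
    (training is plateauing), grant a larger epsilon so the now-delicate
    updates receive less perturbation."""
    base = remaining_eps / max(rounds_left, 1)
    improvement = max(prev_loss - curr_loss, 0.0) / max(prev_loss, 1e-12)
    # Small improvement -> larger multiplier, capped at (1 + boost).
    eps_t = base * (1.0 + boost * (1.0 - min(improvement, 1.0)))
    return min(eps_t, remaining_eps)
```

A server-side loop would call this once per round, subtract the returned epsilon from the remaining budget, and broadcast it to clients before they perturb their updates.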
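Finally, a minimal sketch of the cluster-then-aggregate stage, assuming k-means over the flattened perturbed updates and inverse-variance weighting of the cluster means; inverse-variance weighting is a standard minimum-variance way to merge unbiased estimates, and the paper's own per-layer variance rule may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_aggregate(updates: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Group perturbed client updates (one row per client) into clusters,
    average within each cluster (the LDP noise is zero-mean, so each
    cluster mean is an unbiased estimate of that cluster's clean mean),
    then merge the cluster means with inverse-variance weights."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(updates)
    means, inv_vars = [], []
    for c in range(n_clusters):
        members = updates[labels == c]
        means.append(members.mean(axis=0))
        # The variance of a cluster mean shrinks with cluster size; the
        # floor keeps singleton clusters from getting unbounded weight.
        var = max(members.var(axis=0).mean() / len(members), 1e-6)
        inv_vars.append(1.0 / var)
    weights = np.asarray(inv_vars) / np.sum(inv_vars)
    return np.sum([w * m for w, m in zip(weights, means)], axis=0)
```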
Source journal: Computer Networks (Engineering & Technology – Telecommunications)
CiteScore: 10.80
Self-citation rate: 3.60%
Articles per year: 434
Review time: 8.6 months
Journal introduction: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.