Hierarchical Federated Learning with Gaussian Differential Privacy

Tao Zhou
{"title":"基于高斯差分隐私的分层联邦学习","authors":"Tao Zhou","doi":"10.1145/3573834.3574544","DOIUrl":null,"url":null,"abstract":"Federated learning is a privacy preserving machine learning technology. Each participant can build the model without disclosing the underlying data, and only shares the weight update and gradient information of the model with the server. However, a lot of work shows that the attackers can easily obtain the client’s contributions and the relevant privacy training data from the public shared gradient, so the gradient exchange is no longer safe. In order to ensure the security of Federated learning, in the differential privacy method, noise is added to the model update to obscure the contribution of the client, thereby resisting member reasoning attacks, preventing malicious clients from knowing other client information, and ensuring private output. This paper proposes a new differential privacy aggregation scheme, which adopts a more fine-grained hierarchy update strategy. For the first time, the f-differential privacy (f-DP) method is used for the privacy analysis of federated aggregation. Adding Gaussian noise disturbance model update in order to protect the privacy of the client level. We prove that the f-DP differential privacy method improves the previous privacy analysis by experiments. It accurately captures the loss of privacy at every communication round in federal training, and overcome the problem of ensuring privacy at the cost of reducing model utility in most previous work. At the same time, it provides a federal model updating scheme with wider applicability and better utility. When enough users participate in federated learning, the client-level privacy guarantee is achieved while minimizing model loss.","PeriodicalId":345434,"journal":{"name":"Proceedings of the 4th International Conference on Advanced Information Science and System","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Hierarchical Federated Learning with Gaussian Differential Privacy\",\"authors\":\"Tao Zhou\",\"doi\":\"10.1145/3573834.3574544\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning is a privacy preserving machine learning technology. Each participant can build the model without disclosing the underlying data, and only shares the weight update and gradient information of the model with the server. However, a lot of work shows that the attackers can easily obtain the client’s contributions and the relevant privacy training data from the public shared gradient, so the gradient exchange is no longer safe. In order to ensure the security of Federated learning, in the differential privacy method, noise is added to the model update to obscure the contribution of the client, thereby resisting member reasoning attacks, preventing malicious clients from knowing other client information, and ensuring private output. This paper proposes a new differential privacy aggregation scheme, which adopts a more fine-grained hierarchy update strategy. For the first time, the f-differential privacy (f-DP) method is used for the privacy analysis of federated aggregation. Adding Gaussian noise disturbance model update in order to protect the privacy of the client level. We prove that the f-DP differential privacy method improves the previous privacy analysis by experiments. 
It accurately captures the loss of privacy at every communication round in federal training, and overcome the problem of ensuring privacy at the cost of reducing model utility in most previous work. At the same time, it provides a federal model updating scheme with wider applicability and better utility. When enough users participate in federated learning, the client-level privacy guarantee is achieved while minimizing model loss.\",\"PeriodicalId\":345434,\"journal\":{\"name\":\"Proceedings of the 4th International Conference on Advanced Information Science and System\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 4th International Conference on Advanced Information Science and System\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3573834.3574544\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th International Conference on Advanced Information Science and System","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3573834.3574544","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Federated learning is a privacy-preserving machine learning technology. Each participant can build the model without disclosing its underlying data, sharing only the model's weight updates and gradient information with the server. However, a large body of work shows that attackers can easily recover a client's contributions and the associated private training data from the publicly shared gradients, so gradient exchange is no longer safe. To secure federated learning, differential privacy methods add noise to the model updates to obscure each client's contribution, thereby resisting membership inference attacks, preventing malicious clients from learning information about other clients, and ensuring private outputs. This paper proposes a new differentially private aggregation scheme that adopts a more fine-grained hierarchical update strategy. For the first time, the f-differential privacy (f-DP) method is used for the privacy analysis of federated aggregation. Gaussian noise is added to perturb the model updates in order to provide client-level privacy protection. We show experimentally that the f-DP method improves on previous privacy analyses: it accurately captures the privacy loss at every communication round of federated training and overcomes the problem, common in prior work, of guaranteeing privacy only at the cost of reduced model utility. At the same time, it yields a federated model update scheme with wider applicability and better utility. When enough users participate in federated learning, client-level privacy is guaranteed while model loss is kept to a minimum.
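The abstract does not spell out the accounting details, so the following is only a standard statement of Gaussian differential privacy from the f-DP literature (Dong, Roth, and Su), given as background rather than as the paper's own derivation. A mechanism is μ-GDP if its trade-off function is bounded below by the Gaussian trade-off function, composition over rounds follows a square-root law, and a μ-GDP guarantee can be converted to (ε, δ)-DP:

```latex
% Gaussian trade-off function: \Phi is the standard normal CDF.
G_\mu(\alpha) = \Phi\!\left(\Phi^{-1}(1-\alpha) - \mu\right)

% Composition: T rounds, each \mu-GDP, are jointly \sqrt{T}\,\mu-GDP.

% Conversion to (\varepsilon, \delta)-DP:
\delta(\varepsilon) = \Phi\!\left(-\tfrac{\varepsilon}{\mu} + \tfrac{\mu}{2}\right)
  - e^{\varepsilon}\,\Phi\!\left(-\tfrac{\varepsilon}{\mu} - \tfrac{\mu}{2}\right)
```

Below is a minimal illustrative sketch of the kind of client-level, Gaussian-noised aggregation the abstract describes, in the style of DP-FedAvg: clip each client update, sum, add Gaussian noise scaled to the per-client sensitivity, and average. It is not the paper's exact algorithm, and all function and parameter names (clip_update, aggregate_round, clip_norm, noise_multiplier) are hypothetical.

```python
# Sketch of client-level differentially private federated aggregation.
# Assumes the server receives one flattened update vector per client.
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale the update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def aggregate_round(client_updates, clip_norm: float,
                    noise_multiplier: float,
                    rng: np.random.Generator) -> np.ndarray:
    """One round of Gaussian-noised aggregation over client updates."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    # Gaussian mechanism: noise std is the per-client sensitivity
    # (clip_norm) times the noise multiplier z.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: 100 clients, each contributing a 10-dimensional update.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(100)]
global_delta = aggregate_round(updates, clip_norm=1.0,
                               noise_multiplier=1.1, rng=rng)
```

With this setup, the privacy guarantee strengthens as the noise multiplier grows and, because the noise is added to the sum before averaging, its relative effect on the averaged update shrinks as more clients participate, which matches the abstract's claim that client-level privacy comes with minimal model loss when enough users take part.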