Communication Reducing Quantization for Federated Learning with Local Differential Privacy Mechanism

Huixuan Zong, Qing Wang, Xiaofeng Liu, Yinchuan Li, Yunfeng Shao
{"title":"Communication Reducing Quantization for Federated Learning with Local Differential Privacy Mechanism","authors":"Huixuan Zong, Qing Wang, Xiaofeng Liu, Yinchuan Li, Yunfeng Shao","doi":"10.1109/iccc52777.2021.9580315","DOIUrl":null,"url":null,"abstract":"As an emerging framework of distributed learning, federated learning (FL) has been a research focus since it enables clients to train deep learning models collaboratively without exposing their original data. Nevertheless, private information can still be inferred from the communicated model parameters by adversaries. In addition, due to the limited channel bandwidth, the model communication between clients and the server has become a serious bottleneck. In this paper, we consider an FL framework that utilizes local differential privacy, where the client adds artificial Gaussian noise to the local model update before aggregation. To reduce the communication overhead of the differential privacy-protected model, we propose the universal vector quantization for FL with local differential privacy mechanism, which quantizes the model parameters in a universal vector quantization approach. Furthermore, we analyze the privacy performance of the proposed approach and track the privacy loss by accounting the log moments. Experiments show that even if the quantization bit is relatively small, our method can achieve model compression without reducing the accuracy of the global model.","PeriodicalId":425118,"journal":{"name":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/CIC International Conference on Communications in China (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccc52777.2021.9580315","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

As an emerging framework for distributed learning, federated learning (FL) has become a research focus because it enables clients to train deep learning models collaboratively without exposing their original data. Nevertheless, adversaries can still infer private information from the communicated model parameters. In addition, because channel bandwidth is limited, model communication between the clients and the server has become a serious bottleneck. In this paper, we consider an FL framework that uses local differential privacy, where each client adds artificial Gaussian noise to its local model update before aggregation. To reduce the communication overhead of the differentially private model, we propose universal vector quantization for FL with a local differential privacy mechanism, which quantizes the model parameters with a universal vector quantizer. Furthermore, we analyze the privacy performance of the proposed approach and track the privacy loss by accounting for the log moments. Experiments show that even when the number of quantization bits is relatively small, our method achieves model compression without reducing the accuracy of the global model.
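The abstract does not give implementation details, so the following is a minimal NumPy sketch of the general client-side flow it describes: clip the local model update, add artificial Gaussian noise for local differential privacy, and quantize the noised update before uploading it. The subtractive dithered uniform quantizer here is only an illustrative stand-in for the paper's universal vector quantization, and all names and parameter values (clip_norm, sigma, num_bits) are hypothetical choices, not values taken from the paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip the L2 norm of a model update (a standard step before adding DP noise)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm)

def add_gaussian_noise(update, clip_norm, sigma, rng):
    """Local differential privacy: add artificial Gaussian noise calibrated to the clipping norm."""
    return update + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def dithered_quantize(update, num_bits, rng):
    """Illustrative stand-in for the paper's universal vector quantization:
    subtractive dithered uniform quantization of the noised update."""
    levels = 2 ** num_bits
    lo, hi = update.min(), update.max()
    step = (hi - lo) / (levels - 1)
    dither = rng.uniform(-step / 2, step / 2, size=update.shape)
    q = np.round((update + dither - lo) / step)  # integer indices sent to the server
    return q, lo, step, dither

def dequantize(q, lo, step, dither):
    """Server-side reconstruction (the dither is shared pseudo-randomness, so it is not transmitted)."""
    return q * step + lo - dither

# Hypothetical single-client round with a flattened model update.
rng = np.random.default_rng(0)
local_update = rng.standard_normal(1000)
clipped = clip_update(local_update, clip_norm=1.0)
noised = add_gaussian_noise(clipped, clip_norm=1.0, sigma=1.1, rng=rng)
q, lo, step, dither = dithered_quantize(noised, num_bits=4, rng=rng)
recovered = dequantize(q, lo, step, dither)
print("quantization error:", np.linalg.norm(recovered - noised))
```

In such a scheme, only the integer indices (plus a small amount of side information such as the quantization range) need to be uploaded, which is where the communication saving comes from; the privacy analysis would then account for the Gaussian noise via the log moments, as the abstract states.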