Gradient Compression with a Variational Coding Scheme for Federated Learning

B. Kathariya, Zhu Li, Jianle Chen, G. V. D. Auwera
{"title":"Gradient Compression with a Variational Coding Scheme for Federated Learning","authors":"B. Kathariya, Zhu Li, Jianle Chen, G. V. D. Auwera","doi":"10.1109/VCIP53242.2021.9675436","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL), a distributed machine learning architecture, emerged to solve the intelligent data analysis on massive data generated at network edge-devices. With this paradigm, a model is jointly learned in parallel at edge-devices without needing to send voluminous data to a central FL server. This not only allows a model to learn in a feasible duration by reducing network latency but also preserves data privacy. Nonetheless, when thousands of edge-devices are attached to an FL framework, limited network resources inevitably impose intolerable training latency. In this work, we propose model-update compression to solve this issue in a very novel way. The proposed method learns multiple Gaussian distributions that best describe the high dimensional gradient parameters. In the FL server, high dimensional gradients are repopulated from Gaussian distributions utilizing likelihood function parameters which are communicated to the server. Since the distribution information parameters constitute a very small percentage of values compared to the high dimensional gradients themselves, our proposed method is able to save significant uplink band-width while preserving the model accuracy. Experimental results validated our claim.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Visual Communications and Image Processing (VCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP53242.2021.9675436","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Federated Learning (FL), a distributed machine learning architecture, emerged to enable intelligent analysis of the massive data generated at network edge devices. Under this paradigm, a model is jointly learned in parallel at the edge devices without needing to send voluminous data to a central FL server. This not only allows a model to be trained within a feasible time by reducing network latency, but also preserves data privacy. Nonetheless, when thousands of edge devices are attached to an FL framework, limited network resources inevitably impose intolerable training latency. In this work, we propose model-update compression to address this issue in a novel way. The proposed method learns multiple Gaussian distributions that best describe the high-dimensional gradient parameters. At the FL server, the high-dimensional gradients are repopulated from these Gaussian distributions using the likelihood-function parameters communicated to the server. Since the distribution parameters constitute a very small fraction of the values compared to the high-dimensional gradients themselves, the proposed method saves significant uplink bandwidth while preserving model accuracy. Experimental results validate this claim.
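The abstract describes the pipeline only at a high level. As a rough illustration of the bandwidth argument, the sketch below quantizes a flattened gradient against a small learned Gaussian mixture: each gradient entry is replaced by the mean of its most likely component, so the uplink carries only the K component means plus a log2(K)-bit index per entry instead of a 32-bit float per entry. This is a minimal sketch under stated assumptions, not the paper's actual variational coding scheme; the function names (compress_gradient, decompress_gradient) and the use of scikit-learn's GaussianMixture are illustrative choices.

```python
# Minimal sketch of GMM-based gradient compression for FL uplink traffic.
# NOTE: illustrative only -- not the method of Kathariya et al.; it shows why
# sending distribution parameters (plus short indices) is far cheaper than
# sending the raw high-dimensional gradient.
import numpy as np
from sklearn.mixture import GaussianMixture

def compress_gradient(grad: np.ndarray, n_components: int = 8):
    """Client side: fit a small GMM to the flattened gradient values."""
    values = grad.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=0)
    gmm.fit(values)
    # Hard-assign each entry to its most likely component: log2(K) bits/entry.
    labels = gmm.predict(values).astype(np.uint8)
    means = gmm.means_.ravel()          # only K floats travel uplink
    return means, labels, grad.shape

def decompress_gradient(means: np.ndarray, labels: np.ndarray, shape):
    """Server side: repopulate the gradient from the component means."""
    return means[labels].reshape(shape)

# Usage: ~32 bits/entry shrinks to log2(8) = 3 bits/entry plus 8 means.
g = np.random.randn(10_000).astype(np.float32)
means, labels, shape = compress_gradient(g, n_components=8)
g_hat = decompress_gradient(means, labels, shape)
print("reconstruction MSE:", float(np.mean((g - g_hat) ** 2)))
```

Hard assignment is the simplest choice here; a scheme closer to the paper's description would exploit the full likelihood model (soft responsibilities, entropy coding of the indices) to trade a little computation for further rate savings.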