Stochastic gradient compression for federated learning over wireless network

Xiaohan Lin, Liu Yuan, Fangjiong Chen, Huang Yang, Xiaohu Ge
{"title":"无线网络联合学习的随机梯度压缩","authors":"Xiaohan Lin, Liu Yuan, Fangjiong Chen, Huang Yang, Xiaohu Ge","doi":"10.23919/JCC.fa.2022-0660.202404","DOIUrl":null,"url":null,"abstract":"As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI-model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to edge server in training, which cause severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of edge devices. We first derive a closed form of the communication compression in terms of sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and deal with the quantization resource allocation problem for the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.","PeriodicalId":504777,"journal":{"name":"China Communications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Stochastic gradient compression for federated learning over wireless network\",\"authors\":\"Xiaohan Lin, Liu Yuan, Fangjiong Chen, Huang Yang, Xiaohu Ge\",\"doi\":\"10.23919/JCC.fa.2022-0660.202404\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI-model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to edge server in training, which cause severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of edge devices. We first derive a closed form of the communication compression in terms of sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and deal with the quantization resource allocation problem for the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. 
Simulations show that the proposed scheme outperforms the benchmarks.\",\"PeriodicalId\":504777,\"journal\":{\"name\":\"China Communications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"China Communications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/JCC.fa.2022-0660.202404\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"China Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/JCC.fa.2022-0660.202404","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices must upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
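The abstract names two compression steps, sparsification and quantization, but does not spell out the operators. As a rough illustration, the Python sketch below assumes top-k magnitude sparsification followed by b-bit uniform quantization of the surviving values; the function names, the top-k rule, and the payload bit count at the end are illustrative assumptions, not the paper's operators or its derived closed-form compression expression.

```python
import numpy as np

def compress_gradient(grad, sparsity_ratio=0.01, num_bits=4):
    """Sketch of device-side compression: top-k sparsification,
    then uniform quantization of the kept values (assumed scheme)."""
    d = grad.size
    k = max(1, int(sparsity_ratio * d))

    # Sparsify: keep only the k largest-magnitude entries.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    values = grad[idx]

    # Quantize the kept values uniformly to 2^num_bits - 1 levels.
    vmin, vmax = values.min(), values.max()
    levels = 2 ** num_bits - 1
    scale = (vmax - vmin) / levels if vmax > vmin else 1.0
    q = np.round((values - vmin) / scale).astype(np.uint32)

    # Rough uplink payload: k quantized values, k indices of
    # ceil(log2(d)) bits each, plus two float32 scalars (vmin, scale).
    payload_bits = k * (num_bits + int(np.ceil(np.log2(d)))) + 2 * 32
    return idx, q, vmin, scale, payload_bits

def decompress_gradient(d, idx, q, vmin, scale):
    """Server-side reconstruction of the sparse, quantized gradient."""
    grad_hat = np.zeros(d, dtype=np.float32)
    grad_hat[idx] = vmin + q.astype(np.float32) * scale
    return grad_hat

rng = np.random.default_rng(0)
g = rng.standard_normal(10_000).astype(np.float32)
idx, q, vmin, scale, bits = compress_gradient(g)
g_hat = decompress_gradient(g.size, idx, q, vmin, scale)
print(bits, 32 * g.size)  # compressed vs. uncompressed float32 payload
```

Under these assumptions, a 10,000-dimensional gradient with sparsity_ratio = 0.01 and 4-bit quantization costs roughly 100 × (4 + 14) + 64 = 1,864 uplink bits instead of 320,000 bits for the raw float32 vector, which is the kind of communication saving the abstract targets; the paper's resource allocation then tunes such factors per device under the multiple-access channel capacity constraint.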