On the O(1/k) Convergence of Distributed Gradient Methods Under Random Quantization

Amit Dutta; Thinh T. Doan
IEEE Control Systems Letters (Impact Factor 2.4, Q2 in Automation & Control Systems)
DOI: 10.1109/LCSYS.2024.3519013
Journal: IEEE Control Systems Letters, vol. 8, pp. 2967-2972
Published: 2024-12-16 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10804186/
Cited by: 0

Abstract

We revisit the so-called distributed two-time-scale stochastic gradient method for solving a strongly convex optimization problem over a network of agents in a bandwidth-limited regime. In this setting, the agents can only exchange quantized values of their local variables using a limited number of communication bits. Due to quantization errors, the best known convergence results for this method achieve only a suboptimal rate $\mathcal{O}(1/\sqrt{k})$, while the optimal rate without quantization is $\mathcal{O}(1/k)$, where $k$ is the iteration index. The main contribution of this letter is to close this theoretical gap: we study a sufficient condition and develop a new analysis and step-size selection that achieve the optimal convergence rate $\mathcal{O}(1/k)$ for distributed gradient methods given any number of quantization bits. We provide numerical simulations to illustrate the effectiveness of our theoretical results.
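The scheme the abstract describes can be sketched in code. The following is a minimal illustrative simulation, not the paper's exact algorithm or step-size schedule: it runs a two-time-scale distributed gradient update over a ring of agents that exchange dithered (unbiased) randomly quantized values. The local quadratic costs, mixing matrix, quantizer spacing `delta`, and step-size exponents are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_quantize(x, delta=0.05):
    """Unbiased stochastic quantizer: snap each entry of x to a grid of
    spacing delta, rounding up with probability proportional to the
    remainder so that E[Q(x)] = x."""
    low = np.floor(x / delta) * delta
    p_up = (x - low) / delta
    return low + delta * (rng.random(x.shape) < p_up)

# Ring network of n agents; agent i holds f_i(x) = 0.5 * a[i] * (x - b[i])**2,
# so the minimizer of sum_i f_i is a weighted average of the b[i].
n = 5
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
x = np.zeros(n)                      # each agent's local estimate
x_star = (a @ b) / a.sum()           # minimizer of the global objective

# Doubly stochastic mixing matrix for the ring (each agent averages with
# its two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for k in range(1, 20001):
    beta = k ** -0.3                 # consensus (fast) step size -- assumption
    alpha = 0.5 / k                  # gradient (slow) step size -- assumption
    q = random_quantize(x)           # agents broadcast quantized estimates
    grad = a * (x - b)               # local gradients at local estimates
    # Two-time-scale update: consensus on quantized values + gradient step.
    x = x + beta * (W @ q - q) - alpha * grad

print(f"max |x_i - x*| after 20000 iterations: {np.abs(x - x_star).max():.4f}")
```

Because `W` is doubly stochastic, the quantization noise cancels exactly in the network average, and the diminishing ratio `alpha / beta` drives the agents toward consensus; this separation of time scales is the mechanism such analyses exploit.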
Source journal: IEEE Control Systems Letters (Mathematics - Control and Optimization). CiteScore: 4.40; self-citation rate: 13.30%; annual publications: 471.