A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Qingguo Lü, X. Liao, Huaqing Li, Tingwen Huang
{"title":"A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks","authors":"Qingguo Lü, X. Liao, Huaqing Li, Tingwen Huang","doi":"10.1109/TSMC.2019.2960770","DOIUrl":null,"url":null,"abstract":"In this article, we concentrate on dealing with the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the principal target is to minimize a global cost function (formulated by the average of all local cost functions) while obeying the network connectivity structure. Most of the existing methods, such as push-sum strategy, have eliminated the unbalancedness induced by the directed network via utilizing column-stochastic weights, which may be infeasible if the distributed implementation requires each unit to gain access to (at least) its out-degree information. In contrast, to be suitable for the directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named as D-DNGT, that incorporates the gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of outstanding consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward if each unit locally chooses a suitable step-size and privately regulates the weights on information that acquires from in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and small sufficiently, we can prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments to confirm the findings in this article and contrast D-DNGT with recently proposed distributed optimization approaches.","PeriodicalId":55007,"journal":{"name":"IEEE Transactions on Systems Man and Cybernetics Part A-Systems and Humans","volume":"135 1","pages":"6258-6270"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"34","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Systems Man and Cybernetics Part A-Systems and Humans","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TSMC.2019.2960770","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 34

Abstract

In this article, we focus on the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the objective is to minimize a global cost function (the average of all local cost functions) while respecting the network connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by employing column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to access (at least) its out-degree information. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, that incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of well-known consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward if each unit locally chooses a suitable step-size and privately regulates the weights on the information it acquires from its in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and sufficiently small, we prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments that confirm the findings of this article and compare D-DNGT with recently proposed distributed optimization approaches.
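The abstract names the ingredients of D-DNGT (row-stochastic weights, gradient tracking, Nesterov-style momentum, nonuniform step-sizes) without stating the update equations. The sketch below is a minimal illustration of the general form such recursions take in the row-stochastic gradient-tracking literature, not the paper's exact D-DNGT update: a consensus step over row-stochastic weights, an eigenvector-estimate recursion that corrects the directional imbalance, a gradient tracker, and a per-agent momentum extrapolation. The network, the weight matrix `A`, the step-sizes `alpha`, the momentum coefficients `beta`, and the quadratic costs defined by `b` are all illustrative assumptions.

```python
import numpy as np

n = 4
b = np.array([1.0, 3.0, -2.0, 6.0])

def grad(s):
    # Stacked local gradients of f_i(x) = 0.5 * (x - b_i)^2;
    # agent i only ever evaluates its own component.
    return s - b

# Row-stochastic weights on a directed ring (strongly connected): each agent
# normalizes the weights on its in-neighbors, so no out-degree is needed.
A = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))

alpha = np.array([0.08, 0.10, 0.06, 0.09])  # nonuniform step-sizes (assumed values)
beta = np.array([0.10, 0.05, 0.08, 0.12])   # nonuniform momentum coefficients (assumed)

x = np.zeros(n)   # primal iterates (one scalar per agent)
s = x.copy()      # momentum-augmented iterates
y = np.eye(n)     # eigenvector-estimate recursion, y_i(0) = e_i
z = grad(s)       # gradient trackers, initialized at the local gradients
g_old, d_old = grad(s), np.diag(y).copy()

for _ in range(400):
    x_new = A @ s - alpha * z            # consensus + descent along tracked gradient
    s_new = x_new + beta * (x_new - x)   # Nesterov-like momentum extrapolation
    y = A @ y                            # y_ii -> left Perron entry of A
    d = np.diag(y)
    g = grad(s_new)
    z = A @ z + g / d - g_old / d_old    # track the imbalance-corrected average gradient
    x, s, g_old, d_old = x_new, s_new, g, d

print(np.round(x, 4), "-> minimizer of the average cost:", b.mean())
```

On this toy problem every agent's iterate approaches mean(b) = 2.0, the minimizer of the average cost. The recursion y = A @ y (with y(0) = I) drives each diagonal entry toward the corresponding entry of the left Perron eigenvector of A; dividing the local gradients by this estimate removes the bias that row-stochastic weights would otherwise introduce, which is what allows each agent to build its weights purely from in-neighbor information.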
About the Journal

The scope of the IEEE Transactions on Systems, Man, and Cybernetics: Systems includes the fields of systems engineering. It includes issue formulation, analysis and modeling, decision making, and issue interpretation for any of the systems engineering lifecycle phases associated with the definition, development, and deployment of large systems. In addition, it includes systems management, systems engineering processes, and a variety of systems engineering methods such as optimization, modeling and simulation.