A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks

Xuyang Wu, K. C. Sou, Jie Lu
{"title":"时变网络非光滑分布优化的正则化Fenchel对偶梯度方法","authors":"Xuyang Wu, K. C. Sou, Jie Lu","doi":"10.1080/10556788.2023.2189713","DOIUrl":null,"url":null,"abstract":"In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, for minimizing the sum of their local objective functions subject to their local constraints. Different from most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM is able to handle problems with general convex objective functions and distinct local constraints, and still has non-asymptotic convergence results. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within iterations. Such iteration complexity can be improved to if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competence of RFDGM in practice.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"87 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks\",\"authors\":\"Xuyang Wu, K. C. Sou, Jie Lu\",\"doi\":\"10.1080/10556788.2023.2189713\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, for minimizing the sum of their local objective functions subject to their local constraints. 
Different from most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM is able to handle problems with general convex objective functions and distinct local constraints, and still has non-asymptotic convergence results. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within iterations. Such iteration complexity can be improved to if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competence of RFDGM in practice.\",\"PeriodicalId\":124811,\"journal\":{\"name\":\"Optimization Methods and Software\",\"volume\":\"87 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optimization Methods and Software\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/10556788.2023.2189713\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optimization Methods and Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10556788.2023.2189713","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, for minimizing the sum of their local objective functions subject to their local constraints. Different from most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM is able to handle problems with general convex objective functions and distinct local constraints, and still has non-asymptotic convergence results. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within iterations. Such iteration complexity can be improved to if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competence of RFDGM in practice.
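To make the setting concrete, the following is a minimal sketch of a regularized dual gradient scheme for consensus optimization over a time-varying network. It is an illustration of the general idea (regularize the primal subproblems so the Fenchel dual becomes smooth, then run dual ascent on the consensus disagreement), not the paper's exact RFDGM; all problem data (quadratic local objectives `a`, `b`, box bounds, the alternating edge sets, step size `alpha`, regularization `rho`) are assumptions chosen for the demo.

```python
import numpy as np

# Problem: min sum_i 0.5 * a_i * (x_i - b_i)^2
#          s.t. x_i in [lo, hi] for each node i, and x_1 = ... = x_n.
a = np.array([1.0, 1.0, 1.0])    # local curvatures (assumed data)
b = np.array([0.0, 3.0, 6.0])    # local minimizers (assumed data)
lo, hi = -10.0, 10.0             # local box constraints
rho = 0.01                       # primal regularization -> smooth dual
alpha = 0.2                      # dual ascent step size
edges_t = [[(0, 1)], [(1, 2)]]   # time-varying topology: edges alternate

mu = np.zeros(3)                 # per-node dual aggregates; sum(mu) == 0
for t in range(2000):
    # Primal step: closed-form minimizer of the regularized local
    # Lagrangian f_i(x) + (rho/2) x^2 + mu_i * x over the box.
    x = np.clip((a * b - mu) / (a + rho), lo, hi)
    # Dual step: ascend along the consensus disagreement on the edges
    # active at time t; the symmetric update keeps sum(mu) == 0.
    for (i, j) in edges_t[t % 2]:
        g = alpha * (x[i] - x[j])
        mu[i] += g
        mu[j] -= g

print(x)  # the three entries agree up to a small consensus error
```

Because `rho > 0` makes each primal subproblem strongly convex, the dual function is smooth and plain gradient ascent converges even though the edges are only active intermittently; this is the mechanism the abstract refers to when it says regularization yields non-asymptotic rates for merely convex (nonsmooth) local objectives.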