{"title":"Distributed Randomized Gradient-Free Convex Optimization With Set Constraints Over Time-Varying Weight-Unbalanced Digraphs","authors":"Yanan Zhu;Qinghai Li;Tao Li;Guanghui Wen","doi":"10.1109/TNSE.2024.3506732","DOIUrl":null,"url":null,"abstract":"This paper explores a class of distributed constrained convex optimization problems where the objective function is a sum of <inline-formula><tex-math>$N$</tex-math></inline-formula> convex local objective functions. These functions are characterized by local non-smoothness yet adhere to Lipschitz continuity, and the optimization process is further constrained by <inline-formula><tex-math>$N$</tex-math></inline-formula> distinct closed convex sets. To delineate the structure of information exchange among agents, a series of time-varying weight-unbalance directed graphs are introduced. Furthermore, this study introduces a novel algorithm, distributed randomized gradient-free constrained optimization algorithm. This algorithm marks a significant advancement by substituting the conventional requirement for precise gradient or subgradient information in each iterative update with a random gradient-free oracle, thereby addressing scenarios where accurate gradient information is hard to obtain. A thorough convergence analysis is provided based on the smoothing parameters inherent in the local objective functions, the Lipschitz constants, and a series of standard assumptions. Significantly, the proposed algorithm can converge to an approximate optimal solution within a predetermined error threshold for the consisdered optimization problem, achieving the same convergence rate of <inline-formula><tex-math>${\\mathcal O}(\\frac{\\ln (k)}{\\sqrt{k} })$</tex-math></inline-formula> as the general randomized gradient-free algorithms when the decay step size is selected appropriately. And when at least one of the local objective functions exhibits strong convexity, the proposed algorithm can achieve a faster convergence rate, <inline-formula><tex-math>${\\mathcal O}(\\frac{1}{k})$</tex-math></inline-formula>. Finally, rigorous simulation results verify the correctness of theoretical findings.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"610-622"},"PeriodicalIF":6.7000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10767401/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
This paper explores a class of distributed constrained convex optimization problems in which the objective function is a sum of $N$ convex local objective functions. These functions may be locally non-smooth yet are Lipschitz continuous, and the optimization is further constrained by $N$ distinct closed convex sets. To model the structure of information exchange among agents, a sequence of time-varying weight-unbalanced directed graphs is introduced. This study then proposes a novel algorithm, the distributed randomized gradient-free constrained optimization algorithm. The algorithm replaces the conventional requirement for exact gradient or subgradient information in each iterative update with queries to a random gradient-free oracle, thereby addressing scenarios where accurate gradient information is hard to obtain. A thorough convergence analysis is provided based on the smoothing parameters of the local objective functions, the Lipschitz constants, and a series of standard assumptions. Notably, the proposed algorithm converges to an approximate optimal solution within a predetermined error threshold for the considered optimization problem, achieving the same convergence rate of ${\mathcal O}(\frac{\ln (k)}{\sqrt{k}})$ as general randomized gradient-free algorithms when the decaying step size is selected appropriately. When at least one of the local objective functions is strongly convex, the proposed algorithm achieves a faster convergence rate of ${\mathcal O}(\frac{1}{k})$. Finally, rigorous simulation results verify the correctness of the theoretical findings.
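To make the mechanism concrete, below is a minimal Python sketch of a two-point random gradient-free oracle combined with a projected consensus update of the kind the abstract describes. All names and parameter choices (the smoothing parameter `mu`, the $1/\sqrt{k}$ step size, a box as the local constraint set, and a row-stochastic mixing matrix `W`) are illustrative assumptions, not the paper's notation; in particular, weight-unbalanced digraphs typically require additional correction terms (e.g., push-sum weights) that this sketch omits.

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    """Two-point random gradient-free estimate of the gradient of f at x.

    Samples a Gaussian direction u and returns ((f(x + mu*u) - f(x)) / mu) * u,
    which estimates the gradient of a Gaussian-smoothed surrogate of f.
    """
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def project_box(x, lo, hi):
    """Euclidean projection onto a box constraint set (illustrative choice)."""
    return np.clip(x, lo, hi)

def drgf_step(xs, fs, W, k, mu=1e-3, lo=-1.0, hi=1.0, rng=None):
    """One synchronized iteration for N agents (a sketch, not the paper's method).

    xs : (N, d) stacked agent states
    fs : list of N local objective functions
    W  : (N, N) row-stochastic mixing matrix for the current digraph
    k  : iteration counter, used in the decaying step size
    """
    if rng is None:
        rng = np.random.default_rng()
    alpha = 1.0 / np.sqrt(k + 1)   # decaying step size (assumed form)
    mixed = W @ xs                 # consensus/mixing step over the digraph
    for i, f in enumerate(fs):
        g = gradient_free_oracle(f, mixed[i], mu, rng)
        xs[i] = project_box(mixed[i] - alpha * g, lo, hi)
    return xs

# Toy usage: N agents minimizing a sum of non-smooth (L1-type) local objectives.
rng = np.random.default_rng(0)
N, d = 4, 3
xs = rng.standard_normal((N, d))
fs = [lambda x, c=c: np.abs(x - c).sum() for c in rng.standard_normal((N, d))]
W = np.full((N, N), 1.0 / N)       # simple mixing matrix for illustration
for k in range(1000):
    xs = drgf_step(xs, fs, W, k, rng=rng)
```

The oracle queries each local objective at only two points per update, which is what lets the method dispense with subgradient information; the decaying step size matches the ${\mathcal O}(\frac{\ln (k)}{\sqrt{k}})$ regime quoted above.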
Journal Introduction:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles that deal with the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, the journal publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between and co-evolution of different genres of networks.