Anis Hamadouche, Yun Wu, Andrew M. Wallace, João F. C. Mota
DOI: 10.1137/22m1480161
SIAM Journal on Optimization, Volume 34, Issue 1, pp. 278–305, March 2024 (published online January 19, 2024).
Citations: 0
Sharper Bounds for Proximal Gradient Algorithms with Errors
SIAM Journal on Optimization, Volume 34, Issue 1, pp. 278–305, March 2024. Abstract. We analyze the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies. We generalize the deterministic analysis to the quasi-Fejér case and quantify the uncertainty incurred from approximate computing and early-termination errors. We propose new, tighter probabilistic bounds, which we use to verify a simulated Model Predictive Control (MPC) problem with sparse controls, solved under early termination, reduced precision, and proximal errors. We also show that the probabilistic bounds are more suitable than the deterministic ones for algorithm verification and more accurate for application performance guarantees. Under mild statistical assumptions, we further prove that some cumulative error terms follow a martingale property. Conforming to observations, e.g., in [M. Schmidt, N. L. Roux, and F. R. Bach, Convergence rates of inexact proximal-gradient methods for convex optimization, in Advances in Neural Information Processing Systems, 2011, pp. 1458–1466], we also show how acceleration of the algorithm amplifies the gradient and proximal computational errors.
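The algorithm analyzed in the abstract can be illustrated with a minimal NumPy sketch: a proximal gradient iteration for a lasso-type composite objective, with additive random perturbations standing in for the gradient and proximal inaccuracies. The error model (bounded Gaussian noise on each step) and step size are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_grad(A, b, lam, alpha, iters=200,
                      grad_err=0.0, prox_err=0.0, seed=0):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    grad_err and prox_err scale random perturbations injected into the
    gradient step and the proximal step, modeling approximate computing
    and early termination (an illustrative stand-in for the paper's
    error terms, not its exact model).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        e = grad_err * rng.standard_normal(x.shape)   # gradient inaccuracy
        x = soft_threshold(x - alpha * (grad + e), alpha * lam)
        x += prox_err * rng.standard_normal(x.shape)  # proximal inaccuracy
    return x
```

With zero error scales this reduces to the exact method; with small nonzero scales the iterates hover near the exact solution, which is the regime the paper's deterministic and probabilistic bounds quantify.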
Journal overview:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.