{"title":"受约束弱凸优化的 AdaGrad 高概率边界","authors":"Yusu Hong , Junhong Lin","doi":"10.1016/j.jco.2024.101889","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper, we study the high probability convergence of AdaGrad-Norm for constrained, non-smooth, weakly convex optimization with bounded noise and sub-Gaussian noise cases. We also investigate a more general accelerated gradient descent (AGD) template (Ghadimi and Lan, 2016) encompassing the AdaGrad-Norm, the Nesterov's accelerated gradient descent, and the RSAG (Ghadimi and Lan, 2016) with different parameter choices. We provide a high probability convergence rate <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><mn>1</mn><mo>/</mo><msqrt><mrow><mi>T</mi></mrow></msqrt><mo>)</mo></math></span> without knowing the information of the weak convexity parameter and the gradient bound to tune the step-sizes.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000669/pdfft?md5=7c5c4999e38fd8c865761fe3213f35cf&pid=1-s2.0-S0885064X24000669-main.pdf","citationCount":"0","resultStr":"{\"title\":\"High probability bounds on AdaGrad for constrained weakly convex optimization\",\"authors\":\"Yusu Hong , Junhong Lin\",\"doi\":\"10.1016/j.jco.2024.101889\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In this paper, we study the high probability convergence of AdaGrad-Norm for constrained, non-smooth, weakly convex optimization with bounded noise and sub-Gaussian noise cases. We also investigate a more general accelerated gradient descent (AGD) template (Ghadimi and Lan, 2016) encompassing the AdaGrad-Norm, the Nesterov's accelerated gradient descent, and the RSAG (Ghadimi and Lan, 2016) with different parameter choices. 
We provide a high probability convergence rate <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><mn>1</mn><mo>/</mo><msqrt><mrow><mi>T</mi></mrow></msqrt><mo>)</mo></math></span> without knowing the information of the weak convexity parameter and the gradient bound to tune the step-sizes.</p></div>\",\"PeriodicalId\":50227,\"journal\":{\"name\":\"Journal of Complexity\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0885064X24000669/pdfft?md5=7c5c4999e38fd8c865761fe3213f35cf&pid=1-s2.0-S0885064X24000669-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Complexity\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0885064X24000669\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Complexity","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885064X24000669","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS","Score":null,"Total":0}
High probability bounds on AdaGrad for constrained weakly convex optimization
In this paper, we study the high probability convergence of AdaGrad-Norm for constrained, non-smooth, weakly convex optimization under both bounded noise and sub-Gaussian noise. We also investigate a more general accelerated gradient descent (AGD) template (Ghadimi and Lan, 2016) that encompasses AdaGrad-Norm, Nesterov's accelerated gradient descent, and RSAG (Ghadimi and Lan, 2016) under different parameter choices. We provide a high probability convergence rate of Õ(1/√T) without requiring knowledge of the weak convexity parameter or the gradient bound to tune the step-sizes.
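For context, AdaGrad-Norm adapts a single scalar step-size to the accumulated squared gradient norms, which is why no problem constants are needed for tuning. Below is a minimal sketch of the projected variant in this setting, assuming hypothetical oracle names grad (a stochastic subgradient oracle) and project (Euclidean projection onto the constraint set); it illustrates the update shape, not the paper's exact algorithm statement.

```python
import numpy as np

def adagrad_norm(grad, project, x0, eta=1.0, b0=1e-2, T=1000):
    """Minimal AdaGrad-Norm sketch for a constrained problem.

    grad(x)    -- stochastic subgradient oracle (assumed, hypothetical name)
    project(x) -- Euclidean projection onto the feasible set (assumed)
    The scalar step-size eta / b_t shrinks with the observed gradient
    norms, so no weak-convexity parameter or gradient bound is needed.
    """
    x = np.asarray(x0, dtype=float)
    b2 = b0 ** 2                      # accumulator b_t^2, initialized at b_0^2
    for _ in range(T):
        g = grad(x)
        b2 += np.dot(g, g)            # b_t^2 = b_{t-1}^2 + ||g_t||^2
        x = project(x - eta / np.sqrt(b2) * g)
    return x
```

The key design point is that eta is a scale-free constant: the abstract's claim is that this tuning-free choice still attains the Õ(1/√T) rate with high probability.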
Journal Introduction:
The multidisciplinary Journal of Complexity publishes original research papers that contain substantial mathematical results on complexity as broadly conceived. Outstanding review papers will also be published. In the area of computational complexity, the focus is on complexity over the reals, with an emphasis on lower bounds and optimal algorithms. The Journal of Complexity also publishes articles that provide major new algorithms or make important progress on upper bounds. Other models of computation, such as the Turing machine model, are also of interest. Results on computational complexity in a wide variety of areas are solicited.
Areas Include:
• Approximation theory
• Biomedical computing
• Compressed computing and sensing
• Computational finance
• Computational number theory
• Computational stochastics
• Control theory
• Cryptography
• Design of experiments
• Differential equations
• Discrete problems
• Distributed and parallel computation
• High and infinite-dimensional problems
• Information-based complexity
• Inverse and ill-posed problems
• Machine learning
• Markov chain Monte Carlo
• Monte Carlo and quasi-Monte Carlo
• Multivariate integration and approximation
• Noisy data
• Nonlinear and algebraic equations
• Numerical analysis
• Operator equations
• Optimization
• Quantum computing
• Scientific computation
• Tractability of multivariate problems
• Vision and image understanding.