{"title":"带重启的安德森加速梯度法的下降特性","authors":"Wenqing Ouyang, Yang Liu, Andre Milzarek","doi":"10.1137/22m151460x","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 1, Page 336-365, March 2024. <br/> Abstract. Anderson acceleration ([math]) is a popular acceleration technique to enhance the convergence of fixed-point schemes. The analysis of [math] approaches often focuses on the convergence behavior of a corresponding fixed-point residual, while the behavior of the underlying objective function values along the accelerated iterates is currently not well understood. In this paper, we investigate local properties of [math] with restarting applied to a basic gradient scheme ([math]) in terms of function values. Specifically, we show that [math] is a local descent method and that it can decrease the objective function at a rate no slower than the gradient method up to higher-order error terms. These new results theoretically support the good numerical performance of [math] when heuristic descent conditions are used for globalization and they provide a novel perspective on the convergence analysis of [math] that is more amenable to nonconvex optimization problems. Numerical experiments are conducted to illustrate our theoretical findings.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"3 1","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Descent Properties of an Anderson Accelerated Gradient Method with Restarting\",\"authors\":\"Wenqing Ouyang, Yang Liu, Andre Milzarek\",\"doi\":\"10.1137/22m151460x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Journal on Optimization, Volume 34, Issue 1, Page 336-365, March 2024. <br/> Abstract. Anderson acceleration ([math]) is a popular acceleration technique to enhance the convergence of fixed-point schemes. The analysis of [math] approaches often focuses on the convergence behavior of a corresponding fixed-point residual, while the behavior of the underlying objective function values along the accelerated iterates is currently not well understood. In this paper, we investigate local properties of [math] with restarting applied to a basic gradient scheme ([math]) in terms of function values. Specifically, we show that [math] is a local descent method and that it can decrease the objective function at a rate no slower than the gradient method up to higher-order error terms. These new results theoretically support the good numerical performance of [math] when heuristic descent conditions are used for globalization and they provide a novel perspective on the convergence analysis of [math] that is more amenable to nonconvex optimization problems. 
Numerical experiments are conducted to illustrate our theoretical findings.\",\"PeriodicalId\":49529,\"journal\":{\"name\":\"SIAM Journal on Optimization\",\"volume\":\"3 1\",\"pages\":\"\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-01-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIAM Journal on Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/22m151460x\",\"RegionNum\":1,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m151460x","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Descent Properties of an Anderson Accelerated Gradient Method with Restarting
SIAM Journal on Optimization, Volume 34, Issue 1, Page 336-365, March 2024. Abstract. Anderson acceleration ([math]) is a popular acceleration technique to enhance the convergence of fixed-point schemes. The analysis of [math] approaches often focuses on the convergence behavior of a corresponding fixed-point residual, while the behavior of the underlying objective function values along the accelerated iterates is currently not well understood. In this paper, we investigate local properties of [math] with restarting applied to a basic gradient scheme ([math]) in terms of function values. Specifically, we show that [math] is a local descent method and that it can decrease the objective function at a rate no slower than the gradient method up to higher-order error terms. These new results theoretically support the good numerical performance of [math] when heuristic descent conditions are used for globalization and they provide a novel perspective on the convergence analysis of [math] that is more amenable to nonconvex optimization problems. Numerical experiments are conducted to illustrate our theoretical findings.
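To illustrate the type of scheme studied in the paper, below is a minimal sketch of Anderson acceleration with restarting applied to the gradient-descent fixed-point map g(x) = x - alpha * grad f(x). The memory size m, the simple restart rule (clearing the stored history every m steps), the step size, the quadratic test problem, and the function name anderson_gd_restart are illustrative assumptions, not the paper's exact algorithmic choices.

```python
# Minimal sketch (assumed setup): Anderson acceleration with restarting
# applied to the gradient-descent fixed-point map g(x) = x - alpha * grad_f(x).
import numpy as np

def anderson_gd_restart(grad_f, x0, alpha=0.1, m=5, max_iter=200, tol=1e-10):
    g = lambda x: x - alpha * grad_f(x)      # fixed-point map of gradient descent
    x = x0.copy()
    G_hist, R_hist = [], []                  # histories of g(x_k) and residuals r_k = g(x_k) - x_k
    for k in range(max_iter):
        gx = g(x)
        r = gx - x
        if np.linalg.norm(r) < tol:
            break
        G_hist.append(gx)
        R_hist.append(r)
        if len(R_hist) == 1:
            x = gx                           # plain gradient step while the history is empty
        else:
            # Extrapolate: minimize || r_k - dR @ gamma || over gamma (least squares),
            # then combine the stored g-values with the same coefficients.
            R = np.column_stack(R_hist)
            dR = R[:, 1:] - R[:, :-1]
            gamma, *_ = np.linalg.lstsq(dR, R[:, -1], rcond=None)
            G = np.column_stack(G_hist)
            dG = G[:, 1:] - G[:, :-1]
            x = G_hist[-1] - dG @ gamma      # accelerated iterate
        if len(R_hist) >= m:                 # restart: drop the stored history
            G_hist, R_hist = [], []
    return x, k

if __name__ == "__main__":
    # Assumed quadratic test problem f(x) = 0.5 * x^T A x - b^T x.
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((20, 20))
    A = Q.T @ Q + np.eye(20)
    b = rng.standard_normal(20)
    grad_f = lambda x: A @ x - b
    x_star, iters = anderson_gd_restart(grad_f, np.zeros(20), alpha=1.0 / np.linalg.norm(A, 2))
    print(iters, np.linalg.norm(A @ x_star - b))
```

In practice such a scheme is often paired with a heuristic descent check (accepting the accelerated iterate only if it does not increase the objective) for globalization; the descent results summarized in the abstract give theoretical support for that practice.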
Journal introduction:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.