David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip
{"title":"平滑随机优化的回溯逼近法","authors":"David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip","doi":"10.1287/moor.2022.0136","DOIUrl":null,"url":null,"abstract":"Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L<jats:sub>1</jats:sub> consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.Funding: R. Pasupathy received financial support from the Office of Naval Research [Grants N000141712295 and 13000991]. R. Bollapragada received financial support from the Lawrence Livermore National Laboratory and the National Science Foundation [Grant NSF DMS 2324643].","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Retrospective Approximation Approach for Smooth Stochastic Optimization\",\"authors\":\"David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip\",\"doi\":\"10.1287/moor.2022.0136\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. 
Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L<jats:sub>1</jats:sub> consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.Funding: R. Pasupathy received financial support from the Office of Naval Research [Grants N000141712295 and 13000991]. R. Bollapragada received financial support from the Lawrence Livermore National Laboratory and the National Science Foundation [Grant NSF DMS 2324643].\",\"PeriodicalId\":1,\"journal\":{\"name\":\"Accounts of Chemical Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.4000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accounts of Chemical Research\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1287/moor.2022.0136\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1287/moor.2022.0136","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
A Retrospective Approximation Approach for Smooth Stochastic Optimization
Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L1 consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.
Funding: R. Pasupathy received financial support from the Office of Naval Research [Grants N000141712295 and 13000991]. R. Bollapragada received financial support from the Lawrence Livermore National Laboratory and the National Science Foundation [Grant NSF DMS 2324643].
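To make the outer/inner structure of RA concrete, the following is a minimal sketch of an RA-style loop on a synthetic least-squares problem. It is not the paper's algorithm or termination criterion: the geometric sample-size schedule (growth), the fixed inner-iteration budget (inner_budget) used as a crude stand-in for the statistical stopping rule, and the choice of L-BFGS as the plugged-in deterministic solver are all illustrative assumptions.

```python
# Minimal sketch of a Retrospective Approximation (RA) loop. Assumptions:
# geometric sample-size growth and a fixed inner-iteration cap as a crude
# proxy for the paper's statistical stopping rule; synthetic least-squares
# data; parameter values chosen only for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: f(x) = E[(a^T x - b)^2], approximated by subsamples.
d, N = 10, 100_000
A = rng.standard_normal((N, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(N)

def subsampled_objective(x, idx):
    """Sample-average objective value and gradient on the subsample idx."""
    r = A[idx] @ x - b[idx]
    f = 0.5 * np.mean(r ** 2)
    g = A[idx].T @ r / idx.size
    return f, g

x = np.zeros(d)      # incumbent iterate
m = 100              # initial subsample size
growth = 2           # geometric sample-size growth factor (illustrative)
inner_budget = 5     # cap on deterministic-solver steps per RA iteration

for k in range(12):
    idx = rng.choice(N, size=min(m, N), replace=False)
    # "Plug in" a deterministic solver (here L-BFGS) on the subsampled
    # deterministic problem, warm-started at the incumbent iterate, and
    # solve only to a limited extent (here: an iteration cap).
    res = minimize(subsampled_objective, x, args=(idx,), jac=True,
                   method="L-BFGS-B", options={"maxiter": inner_budget})
    x = res.x
    m *= growth      # larger subsample for the next outer iteration

print("final error:", np.linalg.norm(x - x_true))
```

Setting inner_budget to 1 and using a single plain gradient step would recover SG as a special case of this loop; allowing multiple inner steps per subsample is what distinguishes RA in the sketch above.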