{"title":"An efficient accelerated gradient method with memory applicable to composite problems","authors":"Mihai I. Florea","doi":"10.1109/OPTIM-ACEMP50812.2021.9590072","DOIUrl":null,"url":null,"abstract":"Gradient Methods with Memory store at runtime part of the oracle information obtained at previous iterations. This model allows them to outperform classical gradient methods in many situations. Solving the inner problem associated with the model does incur an overhead but, for unconstrained problems where the objective has a Lipschitz gradient, this overhead has been shown to be minimal. In this work we propose an accelerated gradient method with memory applicable to composite problems. In our method the model overhead remains negligible even for constrained problems with non-differentiable objectives. Although the inner problem cannot be solved exactly, we propose a model and choose the starting point of the inner optimization scheme in a way that prevents the accumulation of errors as the algorithm progresses. Moreover, our method dynamically adjusts the convergence guarantees to exceed those of the Fast Gradient Method. The theoretical predictions are confirmed numerically on an image deblurring problem.","PeriodicalId":32117,"journal":{"name":"Bioma","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bioma","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/OPTIM-ACEMP50812.2021.9590072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Gradient Methods with Memory store, at runtime, part of the oracle information obtained at previous iterations. This model allows them to outperform classical gradient methods in many situations. Solving the inner problem associated with the model does incur an overhead, but for unconstrained problems where the objective has a Lipschitz-continuous gradient, this overhead has been shown to be minimal. In this work, we propose an accelerated gradient method with memory applicable to composite problems. In our method, the model overhead remains negligible even for constrained problems with non-differentiable objectives. Although the inner problem cannot be solved exactly, we propose a model and choose the starting point of the inner optimization scheme in a way that prevents the accumulation of errors as the algorithm progresses. Moreover, our method dynamically adjusts its convergence guarantees to exceed those of the Fast Gradient Method. The theoretical predictions are confirmed numerically on an image deblurring problem.
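For orientation, the composite setting referred to above is min_x f(x) + g(x), where f has a Lipschitz gradient and g (possibly non-differentiable, e.g. an l1 penalty or an indicator of a constraint set) admits a cheap proximal operator. The sketch below is a minimal implementation of the baseline Fast Gradient Method (FISTA) that the paper's guarantees are compared against, not of the proposed method with memory; the function names (fista, grad_f, prox_g) and the LASSO test instance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iters=100):
    """Fast Gradient Method (FISTA) for min_x f(x) + g(x),
    where grad_f is the gradient of f (L-Lipschitz) and
    prox_g(v, step) evaluates the proximal operator of step*g at v.
    This is the classical baseline, not the method with memory."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        # Proximal gradient step at the extrapolated point y
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)
        # Nesterov momentum update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Example composite problem (hypothetical LASSO instance):
# f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1,
# so prox_g is soft-thresholding and L = ||A||_2^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, step: np.sign(v) * np.maximum(np.abs(v) - lam * step, 0.0)
x_star = fista(grad_f, prox_g, L, np.zeros(100))
```

A gradient method with memory replaces the single linearization of f used in the proximal step above with a piecewise-linear model built from several stored (point, gradient) pairs, which is what gives rise to the inner problem and the model overhead discussed in the abstract.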