Exact gradient methods with memory

Mihai I. Florea
{"title":"Exact gradient methods with memory","authors":"Mihai I. Florea","doi":"10.1080/10556788.2022.2091559","DOIUrl":null,"url":null,"abstract":"ABSTRACT The Inexact Gradient Method with Memory (IGMM) is able to considerably outperform the Gradient Method by employing a piece-wise linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of , the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation yielding an Accelerated Gradient Method with Memory (AGMM) possessing a worst-case rate of . In our preliminary computational experiments EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently exceeds the Fast Gradient Method.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optimization Methods and Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10556788.2022.2091559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The Inexact Gradient Method with Memory (IGMM) can considerably outperform the Gradient Method by employing a piece-wise linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of O(1/k), the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation, yielding an Accelerated Gradient Method with Memory (AGMM) with a worst-case rate of O(1/k^2). In our preliminary computational experiments EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently outperforms the Fast Gradient Method.
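To make the mechanism concrete, the following is a minimal Python sketch of a generic gradient method with memory, not the authors' EGMM. It keeps a bundle of at most m past points, function values, and gradients; forms the piece-wise linear lower model psi(x) = max_i [f(x_i) + <g_i, x - x_i>]; and at each step solves the auxiliary problem min_x psi(x) + (L/2)||x - y||^2 through its concave quadratic dual over the simplex, here approximately with a few Frank-Wolfe iterations. The function names, bundle size m, and inner iteration count are illustrative assumptions, not taken from the paper.

```python
# Sketch only: a bundle-type gradient method with memory for convex, L-smooth f.
import numpy as np

def gmm_step(y, bundle, L, inner_iters=50):
    """One outer step: solve min_x psi(x) + (L/2)||x - y||^2 via its dual."""
    X, F, G = bundle                        # past points (m,n), values (m,), gradients (m,n)
    b = F + np.einsum('ij,ij->i', G, y - X) # b_i = f(x_i) + <g_i, y - x_i>
    m = len(F)
    lam = np.full(m, 1.0 / m)               # start at the simplex barycenter
    for _ in range(inner_iters):            # Frank-Wolfe on the concave dual
        Glam = G.T @ lam                    # sum_i lam_i g_i
        grad = b - (G @ Glam) / L           # dual gradient at lam
        j = int(np.argmax(grad))            # best vertex of the simplex
        d = -lam.copy()
        d[j] += 1.0                         # direction e_j - lam
        Gd = G.T @ d
        denom = (Gd @ Gd) / L
        # exact line search for the quadratic dual along d, clipped to [0, 1]
        gamma = 1.0 if denom == 0.0 else float(np.clip((grad @ d) / denom, 0.0, 1.0))
        lam = lam + gamma * d
    return y - (G.T @ lam) / L              # primal point recovered from lam

def gmm(f, grad_f, x0, L, m=5, iters=100):
    """Gradient method with a memory of at most m past (point, value, gradient)."""
    x = np.asarray(x0, dtype=float)
    X, F, G = [x.copy()], [f(x)], [grad_f(x)]
    for _ in range(iters):
        x = gmm_step(x, (np.array(X), np.array(F), np.array(G)), L)
        X.append(x.copy()); F.append(f(x)); G.append(grad_f(x))
        if len(F) > m:                      # discard the oldest information
            X.pop(0); F.pop(0); G.pop(0)
    return x

# Toy usage: f(x) = 0.5 x^T A x is L-smooth with L = max eigenvalue of A.
A = np.diag([1.0, 10.0])
x_star = gmm(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
             x0=np.array([3.0, 1.0]), L=10.0)
```

With m = 1 the step reduces to an ordinary gradient step y - grad_f(y)/L, which is one way to see how the memory generalizes the Gradient Method. The paper's point is that the auxiliary problem can be handled exactly rather than to a fixed tolerance; the approximate Frank-Wolfe inner loop above is only a stand-in for that exact solve.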