Stochastic Steffensen method

IF 1.6 · Zone 2 (Mathematics) · JCR Q2 (Mathematics, Applied)
Minda Zhao, Zehua Lai, Lek-Heng Lim
{"title":"随机斯蒂芬森法","authors":"Minda Zhao, Zehua Lai, Lek-Heng Lim","doi":"10.1007/s10589-024-00583-7","DOIUrl":null,"url":null,"abstract":"<p>Is it possible for a first-order method, i.e., only first derivatives allowed, to be quadratically convergent? For univariate loss functions, the answer is yes—the <i>Steffensen method</i> avoids second derivatives and is still quadratically convergent like Newton method. By incorporating a specific step size we can even push its convergence order beyond quadratic to <span>\\(1+\\sqrt{2} \\approx 2.414\\)</span>. While such high convergence orders are a pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive sizes, as randomization invariably compromises convergence speed. We will introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting and requires no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. When restricted to a quadratic objective, our stochastic Steffensen methods reduce to randomized Kaczmarz method—note that this is not true for SGD or SLBFGS—and thus we may also view our methods as a generalization of randomized Kaczmarz to arbitrary objectives.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Stochastic Steffensen method\",\"authors\":\"Minda Zhao, Zehua Lai, Lek-Heng Lim\",\"doi\":\"10.1007/s10589-024-00583-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Is it possible for a first-order method, i.e., only first derivatives allowed, to be quadratically convergent? For univariate loss functions, the answer is yes—the <i>Steffensen method</i> avoids second derivatives and is still quadratically convergent like Newton method. By incorporating a specific step size we can even push its convergence order beyond quadratic to <span>\\\\(1+\\\\sqrt{2} \\\\approx 2.414\\\\)</span>. While such high convergence orders are a pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive sizes, as randomization invariably compromises convergence speed. We will introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting and requires no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. 
When restricted to a quadratic objective, our stochastic Steffensen methods reduce to randomized Kaczmarz method—note that this is not true for SGD or SLBFGS—and thus we may also view our methods as a generalization of randomized Kaczmarz to arbitrary objectives.</p>\",\"PeriodicalId\":55227,\"journal\":{\"name\":\"Computational Optimization and Applications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Optimization and Applications\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1007/s10589-024-00583-7\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Optimization and Applications","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s10589-024-00583-7","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Is it possible for a first-order method, i.e., one allowed only first derivatives, to be quadratically convergent? For univariate loss functions, the answer is yes: the Steffensen method avoids second derivatives and is still quadratically convergent like Newton's method. By incorporating a specific step size we can even push its convergence order beyond quadratic to \(1+\sqrt{2} \approx 2.414\). While such high convergence orders are pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive size, as randomization invariably compromises convergence speed. We introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting and requiring no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. When restricted to a quadratic objective, our stochastic Steffensen methods reduce to the randomized Kaczmarz method (note that this is not true for SGD or SLBFGS), and thus we may also view them as a generalization of randomized Kaczmarz to arbitrary objectives.
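
To make the opening claim concrete, here is a minimal sketch of the classical, deterministic Steffensen iteration the abstract builds on, written for root-finding; applying it to g = f' minimizes a smooth univariate loss f using only first derivatives of f, yet retains Newton-like quadratic convergence near a simple root. The function names and the example objective are illustrative, not taken from the paper.

```python
def steffensen(g, x0, tol=1e-12, max_iter=100):
    """Classical (deterministic) Steffensen iteration for g(x) = 0.

    Newton's derivative g'(x) is replaced by the divided difference
    (g(x + g(x)) - g(x)) / g(x), so only g itself is evaluated, yet
    convergence near a simple root is still quadratic.  Minimizing a
    smooth univariate f means taking g = f', so no f'' is ever needed.
    """
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        denom = g(x + gx) - gx          # ~ g'(x) * g(x)
        if denom == 0.0:                # divided difference degenerated
            break
        x -= gx * gx / denom            # Newton-like step
    return x

# Example: minimize f(x) = x**4 - 3*x + 1 by solving f'(x) = 0.
fprime = lambda x: 4 * x**3 - 3
print(steffensen(fprime, 1.0))          # ~0.9086 = (3/4)**(1/3)
```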
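
The abstract does not spell out the two adaptive learning rates, so the sketch below is only a plausible illustration of the idea under stated assumptions, not the authors' formulas: estimate curvature along the minibatch gradient g with a Steffensen-style divided difference of two gradient evaluations on the same batch, and let that set the step size, with batch size as the only hyperparameter. The names `stochastic_steffensen_sketch` and `grad_batch` and the small-curvature fallback are my own.

```python
import numpy as np

def stochastic_steffensen_sketch(grad_batch, x0, data, batch_size=32,
                                 epochs=10, seed=0):
    """Hedged sketch (not the paper's exact update) of a Steffensen-style
    adaptive learning rate for minibatch gradient descent.

    Curvature along the minibatch gradient g is estimated with the
    divided difference (grad(x + g) - grad(x)) . g, reusing the same
    batch for both evaluations, and the step size is ||g||^2 over that
    estimate.  Batch size is the only tuning knob.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = len(data)
    for _ in range(epochs):
        for _ in range(max(1, n // batch_size)):
            batch = data[rng.choice(n, size=batch_size, replace=False)]
            g = grad_batch(x, batch)
            curv = g @ (grad_batch(x + g, batch) - g)       # ~ g^T H g
            eta = (g @ g) / curv if curv > 1e-12 else 1e-3  # safe fallback
            x -= eta * g
    return x
```

A quick check ties this sketch to the abstract's last claim: on a single least-squares term f_i(x) = (a_i · x - b_i)^2 / 2 with batch size 1, the divided difference gives curv = r^2 ||a_i||^4 for residual r, so eta = 1/||a_i||^2 and the update is exactly a Kaczmarz projection.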
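
For reference, here is the quadratic-objective special case the abstract mentions: a standard randomized Kaczmarz solver with Strohmer-Vershynin row sampling. This is textbook material rather than code from the paper.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0=None, sweeps=50, seed=0):
    """Randomized Kaczmarz for a consistent linear system Ax = b.

    Each step samples row i with probability ||a_i||^2 / ||A||_F^2
    (Strohmer-Vershynin sampling) and projects the iterate onto the
    hyperplane a_i . x = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    row_norms = np.einsum("ij,ij->i", A, A)        # squared row norms
    probs = row_norms / row_norms.sum()
    for _ in range(sweeps * m):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # projection step
    return x

# Example on a small consistent system.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
b = A @ np.array([1.0, -2.0])
print(randomized_kaczmarz(A, b))                   # ~ [1.0, -2.0]
```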

Source journal: Computational Optimization and Applications
CiteScore: 3.70 · Self-citation rate: 9.10% · Annual publications: 91 · Review time: 10 months
Journal description: Computational Optimization and Applications is a peer-reviewed journal committed to the timely publication of research and tutorial papers on the analysis and development of computational algorithms and modeling technology for optimization. Algorithms for either general classes of optimization problems or more specific applied problems are of interest. Both stochastic and deterministic algorithms will be considered. Papers that provide theoretical analysis along with carefully designed computational experiments are particularly welcome. Topics of interest include, but are not limited to: large-scale optimization; unconstrained optimization; linear and quadratic programming; complementarity problems and variational inequalities; constrained optimization; nondifferentiable optimization; integer programming; combinatorial optimization; stochastic optimization; multiobjective optimization; network optimization; complexity theory; approximations and error analysis; parametric programming and sensitivity analysis; parallel, distributed, and vector computing; software, benchmarks, numerical experimentation, and comparisons; modelling languages and systems for optimization; automatic differentiation; and applications in engineering, finance, optimal control, optimal design, operations research, transportation, economics, communications, manufacturing, and management science.