On asymptotic convergence rate of random search

Impact Factor: 1.8 · CAS Zone 3 (Mathematics) · JCR Q1 (Mathematics)
Dawid Tarłowski
DOI: 10.1007/s10898-023-01342-4 (https://doi.org/10.1007/s10898-023-01342-4)
Journal: Journal of Global Optimization
Publication date: 2023-11-24 (Journal Article)
Open access: no
Citations: 0

Abstract


This paper presents general theoretical studies on the asymptotic convergence rate (ACR) for finite-dimensional optimization. Given a continuous problem function and a discrete-time stochastic optimization process, the ACR is the optimal constant controlling the asymptotic behaviour of the expected approximation errors. Under general assumptions, the condition ACR \(<1\) implies linear behaviour of the expected hitting time of the \(\varepsilon\)-optimal sublevel set as \(\varepsilon \rightarrow 0^+\) and determines an upper bound on the convergence rate of the trajectories of the process. This paper provides a general characterization of the ACR and, in particular, shows that some algorithms cannot converge linearly for any nontrivial continuous optimization problem. The relation between the asymptotic convergence rate in the objective space and the asymptotic convergence rate in the search space is also provided. Examples and numerical simulations using a (1+1) self-adaptive evolution strategy and other algorithms are presented.
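To make the abstract's two key objects concrete, the sketch below runs a (1+1) evolution strategy and estimates an empirical convergence rate as the geometric mean of consecutive error ratios, a crude finite-sample proxy for the ACR (roughly, the limiting value of \((\mathbb{E}\,e_t)^{1/t}\)); a value below 1 corresponds to the linear convergence discussed above. This is not the paper's exact algorithm: the step-size adaptation shown is the classic 1/5th success rule, a stand-in for the self-adaptive variant studied in the article, and all names and constants here are illustrative assumptions.

```python
import math
import random

def one_plus_one_es(f, x0, sigma0=1.0, iters=2000, seed=0):
    """(1+1) evolution strategy with 1/5th-success-rule step-size control.

    Returns the history of best objective values (approximation errors
    when the global minimum value is 0, as for the sphere function).
    """
    rng = random.Random(seed)
    x, sigma = list(x0), sigma0
    fx = f(x)
    errors = [fx]
    for _ in range(iters):
        # Sample one Gaussian offspring around the current point.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:              # success: accept and widen the step
            x, fx = y, fy
            sigma *= 1.22
        else:                    # failure: keep parent, shrink the step
            sigma *= 0.82
        errors.append(fx)
    return errors

def empirical_rate(errors, burn_in=100):
    """Geometric mean of consecutive error ratios e_{t+1}/e_t over the
    tail of the run -- a rough empirical stand-in for the ACR."""
    tail = [e for e in errors[burn_in:] if e > 0]
    logs = [math.log(b / a) for a, b in zip(tail, tail[1:])]
    return math.exp(sum(logs) / len(logs))

# Sphere function: a trivial but standard linear-convergence test case.
sphere = lambda x: sum(xi * xi for xi in x)
errs = one_plus_one_es(sphere, [5.0] * 5)
rate = empirical_rate(errs)
print(rate)  # a value in (0, 1) indicates linearly fast error decay
```

On the sphere function the (1+1)-ES with success-based step-size control is known to converge linearly, so the estimated rate settles strictly below 1; a pure random search on the same problem would instead yield ratios approaching 1, matching the paper's point that some algorithms cannot converge linearly on any nontrivial continuous problem.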

Source journal
Journal of Global Optimization (Mathematics — Applied Mathematics)
CiteScore: 0.10
Self-citation rate: 5.60%
Articles per year: 137
Review time: 6 months
Journal description: The Journal of Global Optimization publishes carefully refereed papers that encompass theoretical, computational, and applied aspects of global optimization. While the focus is on original research contributions dealing with the search for global optima of non-convex, multi-extremal problems, the journal’s scope covers optimization in the widest sense, including nonlinear, mixed integer, combinatorial, stochastic, robust, multi-objective optimization, computational geometry, and equilibrium problems. Relevant works on data-driven methods and optimization-based data mining are of special interest. In addition to papers covering theory and algorithms of global optimization, the journal publishes significant papers on numerical experiments, new testbeds, and applications in engineering, management, and the sciences. Applications of particular interest include healthcare, computational biochemistry, energy systems, telecommunications, and finance. Apart from full-length articles, the journal features short communications on both open and solved global optimization problems. It also offers reviews of relevant books and publishes special issues.