Computational Optimization and Applications: Latest Articles

An adaptive regularized proximal Newton-type methods for composite optimization over the Stiefel manifold
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-07-26, DOI: 10.1007/s10589-024-00595-3
Qinsi Wang, Wei Hong Yang
Abstract: Recently, the proximal Newton-type method and its variants have been generalized to solve composite optimization problems over the Stiefel manifold whose objective function is the sum of a smooth function and a nonsmooth function. In this paper, we propose an adaptive quadratically regularized proximal quasi-Newton method, named ARPQN, to solve this class of problems. Under some mild assumptions, the global convergence, the local linear convergence rate and the iteration complexity of ARPQN are established. Numerical experiments and comparisons with other state-of-the-art methods indicate that ARPQN is very promising. We also propose an adaptive quadratically regularized proximal Newton method, named ARPN. It is shown that the ARPN method has a local superlinear convergence rate under certain reasonable assumptions, which demonstrates the attractive convergence properties of regularized proximal Newton methods.
Citations: 0
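The problem class here is $\min_X f(X) + h(X)$ over the Stiefel manifold $\{X : X^T X = I\}$, with $f$ smooth and $h$ nonsmooth. The sketch below is not ARPQN or ARPN; it only makes the problem concrete on a sparse-PCA-style instance ($f(X) = -\mathrm{tr}(X^T A X)$, $h = \lambda \|X\|_1$) with a naive baseline that alternates an ambient proximal-gradient step with a QR retraction. All function names, the step size, and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def qr_retract(Y):
    """Retract an n-by-p matrix onto the Stiefel manifold via thin QR."""
    Q, R = np.linalg.qr(Y)
    # Fix column signs so the retraction is well defined.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def soft_threshold(X, t):
    """Proximal operator of t * ||X||_1 (entrywise soft-thresholding)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def naive_prox_retract(A, p, lam=0.1, step=1e-2, iters=500, seed=0):
    """Naive baseline for min -tr(X^T A X) + lam*||X||_1 over the Stiefel manifold.

    NOT the ARPQN method; this only illustrates the composite problem class:
    an ambient proximal-gradient step followed by a QR retraction."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = qr_retract(rng.standard_normal((n, p)))
    for _ in range(iters):
        grad = -2.0 * A @ X                      # gradient of the smooth part
        X = soft_threshold(X - step * grad, step * lam)
        X = qr_retract(X)                        # restore X^T X = I
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((30, 30))
    A = M @ M.T / 30.0                           # covariance-like symmetric matrix
    X = naive_prox_retract(A, p=3, lam=0.1)
    print("feasibility ||X^T X - I|| :", np.linalg.norm(X.T @ X - np.eye(3)))
    print("objective value           :",
          -np.trace(X.T @ A @ X) + 0.1 * np.abs(X).sum())
```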
Dynamic stochastic projection method for multistage stochastic variational inequalities
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-07-26, DOI: 10.1007/s10589-024-00594-4
Bin Zhou, Jie Jiang, Hailin Sun
Abstract: Stochastic approximation (SA) type methods have been well studied for solving single-stage stochastic variational inequalities (SVIs). This paper proposes a dynamic stochastic projection method (DSPM) for solving multistage SVIs. In particular, we investigate an inexact single-stage SVI and present an inexact stochastic projection method (ISPM) for solving it. Then we give the DSPM for a three-stage SVI by applying the ISPM to each stage. We show that the DSPM can achieve an $\mathcal{O}(\frac{1}{\epsilon^2})$ convergence rate with respect to the total number of required scenarios for the three-stage SVI. We also extend the DSPM to multistage SVIs with more than three stages. Numerical experiments illustrate the effectiveness and efficiency of the DSPM.
Citations: 0
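The single-stage building block that ISPM generalizes is the classical stochastic-approximation projection step $x_{k+1} = \Pi_X(x_k - \alpha_k F(x_k, \xi_k))$. The sketch below shows only that building block on a toy strongly monotone SVI with a box constraint, where the projection is exact (simple clipping); it is not the multistage DSPM, and the operator, noise level, and step-size rule are assumptions made for illustration.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Exact projection onto the box [lo, hi]^n (the paper allows inexact ones)."""
    return np.clip(x, lo, hi)

def stochastic_projection(F_sample, x0, steps=5000, seed=0):
    """Basic SA method for a single-stage SVI: find x in X such that
    E[F(x, xi)]^T (y - x) >= 0 for all y in X.  One noisy operator evaluation
    and one projection per iteration; NOT the multistage DSPM itself."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(1, steps + 1):
        alpha = 1.0 / k                          # diminishing step size
        x = project_box(x - alpha * F_sample(x, rng))
    return x

if __name__ == "__main__":
    n = 5
    rng = np.random.default_rng(42)
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    A = 0.5 * (A + A.T) + n * np.eye(n)          # make the expected operator strongly monotone
    b = rng.standard_normal(n)

    def F_sample(x, rng):
        noise = 0.1 * rng.standard_normal(len(x))
        return A @ x + b + noise                 # noisy evaluation of F(x) = Ax + b

    x_star = stochastic_projection(F_sample, np.zeros(n))
    print("approximate SVI solution:", np.round(x_star, 3))
```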
Extragradient method with feasible inexact projection to variational inequality problem
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-07-26, DOI: 10.1007/s10589-024-00592-6
R. Díaz Millán, O. P. Ferreira, J. Ugon
Abstract: The variational inequality problem in finite-dimensional Euclidean space is addressed in this paper, and two inexact variants of the extragradient method are proposed to solve it. Instead of computing exact projections onto the constraint set, as in previous versions of the extragradient method, the proposed methods compute feasible inexact projections onto the constraint set using a relative error criterion. The first variant is a counterpart to the classic form of the extragradient method with constant steps. To establish its convergence, we need to assume that the operator is pseudo-monotone and Lipschitz continuous, as in the standard approach. For the second variant, instead of a fixed step size, the method finds a suitable step size in each iteration by performing a line search. Like the classical extragradient method, the proposed method performs just two projections onto the feasible set in each iteration. A full convergence analysis is provided, with no Lipschitz continuity assumption on the operator defining the variational inequality problem.
Citations: 0
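The classical constant-step extragradient iteration that the paper starts from computes a prediction $y_k = \Pi_C(x_k - \gamma F(x_k))$ and a correction $x_{k+1} = \Pi_C(x_k - \gamma F(y_k))$, i.e. two projections per iteration. The sketch below implements this classical version with an exact projection onto a Euclidean ball; the paper's feasible inexact projections with a relative error criterion, and its line-search variant, are not reproduced. The operator, constraint set, and step size are illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Exact Euclidean projection onto the ball of given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def extragradient(F, x0, gamma, iters=200, project=project_ball):
    """Classical extragradient method (constant step gamma): two projections
    per iteration, as in the scheme the paper generalizes."""
    x = x0.copy()
    for _ in range(iters):
        y = project(x - gamma * F(x))            # prediction step
        x = project(x - gamma * F(y))            # correction step
    return x

if __name__ == "__main__":
    # Monotone operator F(x) = Mx + q (rotation plus a small symmetric part).
    M = np.array([[0.0, 1.0], [-1.0, 0.0]]) + 0.1 * np.eye(2)
    q = np.array([0.5, -0.25])
    F = lambda x: M @ x + q
    L = np.linalg.norm(M, 2)                     # Lipschitz constant of F
    sol = extragradient(F, np.zeros(2), gamma=0.9 / L)
    print("approximate VI solution:", np.round(sol, 4))
    print("natural residual ||x - P_C(x - F(x))||:",
          np.linalg.norm(sol - project_ball(sol - F(sol))))
```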
Handling of constraints in multiobjective blackbox optimization
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-07-16, DOI: 10.1007/s10589-024-00588-2
Jean Bigeon, Sébastien Le Digabel, Ludovic Salomon
Abstract: This work proposes the integration of two new constraint-handling approaches into the blackbox constrained multiobjective optimization algorithm DMulti-MADS, an extension of the Mesh Adaptive Direct Search (MADS) algorithm for single-objective constrained optimization. The constraints are aggregated into a single constraint violation function, which is used either in a two-phase approach, where the search for a feasible point is prioritized when none is available before improving the current solution set, or in a progressive barrier approach, where any trial point whose constraint violation function value is above a threshold is rejected. This threshold is progressively decreased over the iterations. As in the single-objective case, it is proved that these two variants generate feasible and/or infeasible sequences which converge, in the feasible case, to a set of local Pareto optimal points or, in the infeasible case, to Clarke stationary points of the constraint violation function. Computational experiments show that these two approaches are competitive with other state-of-the-art algorithms.
Citations: 0
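Both approaches rely on aggregating the blackbox constraints $c_j(x) \le 0$ into a single violation function, commonly $h(x) = \sum_j \max(0, c_j(x))^2$, with $h(x) = 0$ exactly at feasible points; the progressive barrier then rejects trial points whose violation exceeds a threshold that is tightened over the iterations. The snippet below only illustrates this aggregation and the accept/reject test on toy constraints; it is not DMulti-MADS, and the specific form of $h$ is a common convention assumed here rather than quoted from the paper.

```python
import numpy as np

def constraint_violation(x, constraints):
    """Aggregate blackbox constraints c_j(x) <= 0 into a single value
    h(x) = sum_j max(0, c_j(x))^2; h(x) = 0 iff x is feasible."""
    return sum(max(0.0, c(x)) ** 2 for c in constraints)

def progressive_barrier_filter(trial_points, constraints, h_max):
    """Keep only trial points whose violation does not exceed the current
    barrier threshold h_max; the threshold shrinks across iterations."""
    return [x for x in trial_points
            if constraint_violation(x, constraints) <= h_max]

if __name__ == "__main__":
    # Two toy constraints: inside the unit disk, and x[0] >= 0.2.
    constraints = [lambda x: np.dot(x, x) - 1.0,
                   lambda x: 0.2 - x[0]]
    rng = np.random.default_rng(0)
    trials = [rng.uniform(-1.5, 1.5, size=2) for _ in range(8)]
    for h_max in (1.0, 0.1, 0.0):                # progressively tighter barrier
        kept = progressive_barrier_filter(trials, constraints, h_max)
        print(f"h_max = {h_max:4.1f}: kept {len(kept)} of {len(trials)} trial points")
```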
Eigenvalue programming beyond matrices
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-07-10, DOI: 10.1007/s10589-024-00591-7
Masaru Ito, Bruno F. Lourenço
Abstract: In this paper we analyze and solve eigenvalue programs, which consist of the task of minimizing a function subject to constraints on the "eigenvalues" of the decision variable. Here, by making use of the FTvN systems framework introduced by Gowda, we interpret "eigenvalues" in a broad fashion going beyond the usual eigenvalues of matrices. This allows us to shed new light on classical problems such as inverse eigenvalue problems and also leads to new applications. In particular, after analyzing and developing a simple projected gradient algorithm for general eigenvalue programs, we show that eigenvalue programs can be used to express what we call vanishing quadratic constraints. A vanishing quadratic constraint requires that a given system of convex quadratic inequalities be satisfied and that at least a certain number of those inequalities be tight. As a particular case, this includes the problem of finding a point x in the intersection of m ellipsoids in such a way that x is also on the boundary of at least $\ell$ of the ellipsoids, for some fixed $\ell > 0$. At the end, we also present some numerical experiments.
Citations: 0
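To make the notion of a vanishing quadratic constraint concrete: with convex quadratics $q_i(x) = x^T A_i x + b_i^T x + c_i$, the constraint asks that $q_i(x) \le 0$ for all $i$ and that at least $\ell$ of the inequalities hold with equality. The check below verifies this for a given point using two touching disks; it is only a definition-level illustration and does not implement the paper's projected gradient algorithm or the FTvN framework. The tolerance and the example data are assumptions.

```python
import numpy as np

def vanishing_quadratic_check(x, quadratics, ell, tol=1e-8):
    """Check a vanishing quadratic constraint: all q_i(x) <= 0 and at least
    `ell` of the q_i are (numerically) tight, i.e. |q_i(x)| <= tol."""
    values = np.array([x @ A @ x + b @ x + c for (A, b, c) in quadratics])
    feasible = bool(np.all(values <= tol))
    num_tight = int(np.sum(np.abs(values) <= tol))
    return feasible and num_tight >= ell, values

if __name__ == "__main__":
    # Two disks written as q_i(x) <= 0: the unit disk centered at the origin,
    # and the unit disk centered at (2, 0).  They touch at x = (1, 0).
    quadratics = [
        (np.eye(2), np.zeros(2), -1.0),              # ||x||^2 - 1 <= 0
        (np.eye(2), np.array([-4.0, 0.0]), 3.0),     # ||x - (2,0)||^2 - 1 <= 0
    ]
    x = np.array([1.0, 0.0])                         # on the boundary of both disks
    ok, vals = vanishing_quadratic_check(x, quadratics, ell=2)
    print("q values:", vals, "-> constraint satisfied with ell = 2:", ok)
```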
Q-fully quadratic modeling and its application in a random subspace derivative-free method
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-06-20, DOI: 10.1007/s10589-024-00590-8
Yiwen Chen, Warren Hare, Amy Wiebe
Abstract: Model-based derivative-free optimization (DFO) methods are an important class of DFO methods that are known to struggle with solving high-dimensional optimization problems. Recent research has shown that incorporating random subspaces into model-based DFO methods has the potential to improve their performance on high-dimensional problems. However, most of the current theoretical and practical results are based on linear approximation models, due to the complexity of quadratic approximation models. This paper proposes a random subspace trust-region algorithm based on quadratic approximations. Unlike most of its precursors, this algorithm does not require any special form of objective function. We study the geometry of sample sets, the error bounds for approximations, and the quality of subspaces. In particular, we provide a technique to construct Q-fully quadratic models, which is easy to analyze and implement. We present an almost-sure global convergence result for our algorithm and give an upper bound on the expected number of iterations needed to find a sufficiently small gradient. We also develop numerical experiments to compare the performance of our algorithm using both linear and quadratic approximation models. The numerical results demonstrate the strengths and weaknesses of using quadratic approximations.
Citations: 0
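The ingredient the paper builds on is a quadratic model of the objective restricted to a low-dimensional random subspace, fitted from function values only. The sketch below draws an orthonormal random basis $Q$, samples the objective at points $x + Qs$, and fits $m(s) = c + g^T s + \tfrac{1}{2} s^T H s$ by least squares; it illustrates the general construction but not the paper's Q-fully quadratic error bounds, sample-set geometry management, or trust-region loop. Sample counts and radii are arbitrary illustrative values.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(s):
    """Monomial basis [1, s_i, s_i*s_j (i<=j)] of a quadratic in the subspace variable s."""
    p = len(s)
    quad = [s[i] * s[j] for i, j in combinations_with_replacement(range(p), 2)]
    return np.concatenate(([1.0], s, quad))

def fit_subspace_quadratic(f, x_center, p=2, radius=0.5, n_samples=12, seed=0):
    """Fit a quadratic model of f restricted to the random affine subspace
    x_center + Q s, using function values only (derivative-free).
    Illustrative sketch; not the paper's Q-fully quadratic construction."""
    rng = np.random.default_rng(seed)
    n = len(x_center)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))       # orthonormal subspace basis
    S = rng.uniform(-radius, radius, size=(n_samples, p))   # samples in subspace coordinates
    Phi = np.array([quadratic_features(s) for s in S])
    fvals = np.array([f(x_center + Q @ s) for s in S])
    coeffs, *_ = np.linalg.lstsq(Phi, fvals, rcond=None)    # least-squares quadratic fit
    model = lambda s: quadratic_features(np.asarray(s)) @ coeffs
    return Q, model

if __name__ == "__main__":
    f = lambda x: np.sum(x ** 2) + 0.5 * x[0] * x[1]        # smooth test function in R^20
    x0 = np.ones(20)
    Q, model = fit_subspace_quadratic(f, x0, p=2)
    s_test = np.array([0.1, -0.2])
    print("model prediction:", model(s_test))
    print("true value      :", f(x0 + Q @ s_test))
```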
A nonsmooth primal-dual method with interwoven PDE constraint solver
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-06-08, DOI: 10.1007/s10589-024-00587-3
Bjørn Jensen, Tuomo Valkonen
Abstract: We introduce an efficient first-order primal-dual method for the solution of nonsmooth PDE-constrained optimization problems. We achieve this efficiency by not solving the PDE or its linearisation on each iteration of the optimization method. Instead, we run the method interwoven with a simple conventional linear system solver (Jacobi, Gauss–Seidel, conjugate gradients), always taking only one step of the linear system solver for each step of the optimization method. The control parameter is updated on each iteration as determined by the optimization method. We prove linear convergence under a second-order growth condition, and numerically demonstrate the performance on a variety of PDEs related to inverse problems involving boundary measurements.
Citations: 0
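The distinguishing idea is to take exactly one sweep of a conventional linear solver per optimization step instead of solving the PDE (and its adjoint) to tolerance. The loop below is a heavily simplified, smooth-regularizer illustration of that interweaving pattern on a 1D Poisson control problem, using one Jacobi sweep for the state and one for the adjoint per outer iteration; it is not the paper's nonsmooth primal-dual method, and the paper's convergence theory does not cover this simplified loop. All problem data and parameters are illustrative assumptions.

```python
import numpy as np

def laplacian_1d(n):
    """1D finite-difference Laplacian with homogeneous Dirichlet BCs on (0, 1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2

def jacobi_sweep(A, x, b):
    """A single Jacobi sweep for A x = b: 'one step of the linear system solver'."""
    D = np.diag(A)
    return x + (b - A @ x) / D

def interwoven_loop(n=40, alpha=1e-2, tau=0.5, iters=2000):
    """Heuristic illustration of the interweaving pattern for
    min_u 0.5*||y - y_d||^2 + 0.5*alpha*||u||^2  s.t.  A y = u:
    one Jacobi sweep on the state, one on the adjoint, then a gradient-type
    control update, per outer iteration.  NOT the paper's primal-dual method."""
    A = laplacian_1d(n)
    t = np.linspace(0.0, 1.0, n + 2)[1:-1]
    y_d = np.sin(np.pi * t)                      # desired state
    y = np.zeros(n); p = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        y = jacobi_sweep(A, y, u)                # inexact state solve
        p = jacobi_sweep(A, p, y - y_d)          # inexact adjoint solve
        u = u - tau * (alpha * u + p)            # control update with inexact adjoint
    return A, y, u, y_d

if __name__ == "__main__":
    A, y, u, y_d = interwoven_loop()
    print("state equation residual ||A y - u||:", np.linalg.norm(A @ y - u))
    print("tracking error ||y - y_d||         :", np.linalg.norm(y - y_d))
```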
Stochastic Steffensen method
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-06-07, DOI: 10.1007/s10589-024-00583-7
Minda Zhao, Zehua Lai, Lek-Heng Lim
Abstract: Is it possible for a first-order method, i.e., one using only first derivatives, to be quadratically convergent? For univariate loss functions, the answer is yes: the Steffensen method avoids second derivatives and is still quadratically convergent like Newton's method. By incorporating a specific step size we can even push its convergence order beyond quadratic to $1+\sqrt{2} \approx 2.414$. While such high convergence orders are a pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive size, as randomization invariably compromises convergence speed. We introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting and requiring no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. When restricted to a quadratic objective, our stochastic Steffensen methods reduce to the randomized Kaczmarz method (note that this is not true for SGD or SLBFGS), and thus we may also view our methods as a generalization of randomized Kaczmarz to arbitrary objectives.
Citations: 0
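The deterministic starting point is Steffensen's iteration, which replaces the second derivative in Newton's method by a difference of first derivatives evaluated at a point shifted by the gradient itself: $x_{k+1} = x_k - f'(x_k)^2 / \big(f'(x_k + f'(x_k)) - f'(x_k)\big)$. The snippet below shows this classical univariate iteration on a toy problem; the paper's stochastic Steffensen learning rates and their minibatch form are not reproduced, and the test function is an illustrative choice.

```python
import math

def steffensen_minimize(fprime, x0, iters=20, tol=1e-12):
    """Classical (deterministic) Steffensen iteration for a univariate problem:
    x_{k+1} = x_k - f'(x_k)^2 / (f'(x_k + f'(x_k)) - f'(x_k)).
    Only first derivatives are used, yet convergence is quadratic near a
    nondegenerate minimizer.  The paper's *stochastic* Steffensen step sizes
    build on this idea but are not implemented here."""
    x = x0
    for _ in range(iters):
        g = fprime(x)
        if abs(g) < tol:
            break
        denom = fprime(x + g) - g        # finite-difference curvature estimate
        if denom == 0.0:
            break
        x -= g * g / denom
    return x

if __name__ == "__main__":
    # Minimize f(x) = exp(x) - 2x, whose unique minimizer is x* = ln 2.
    fprime = lambda x: math.exp(x) - 2.0
    x_star = steffensen_minimize(fprime, x0=1.0)
    print("Steffensen estimate:", x_star, " exact:", math.log(2.0))
```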
Polynomial worst-case iteration complexity of quasi-Newton primal-dual interior point algorithms for linear programming
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-06-07, DOI: 10.1007/s10589-024-00584-6
Jacek Gondzio, Francisco N. C. Sobral
Abstract: Quasi-Newton methods are well-known techniques for large-scale numerical optimization. They use an approximation of the Hessian in optimization problems, or of the Jacobian in systems of nonlinear equations. In the interior point context, quasi-Newton algorithms compute low-rank updates of the matrix associated with the Newton systems, instead of computing it from scratch at every iteration. In this work, we show that a simplified quasi-Newton primal-dual interior point algorithm for linear programming, which alternates between Newton and quasi-Newton iterations, enjoys polynomial worst-case iteration complexity. Feasible and infeasible cases of the algorithm are considered and the most common neighborhoods of the central path are analyzed. To the best of our knowledge, this is the first attempt to deliver polynomial worst-case iteration complexity bounds for these methods. Unsurprisingly, the worst-case complexity results obtained when quasi-Newton directions are used are worse than their counterparts when Newton directions are employed. However, quasi-Newton updates are very attractive for large-scale optimization problems where the cost of factorizing the matrices is much higher than the cost of solving linear systems.
Citations: 0
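The computational trade-off discussed here is refactorizing the Newton system matrix at every interior point iteration versus applying cheap low-rank quasi-Newton updates to an existing approximation. The snippet below illustrates that generic idea with Broyden's rank-one update on a small nonlinear system, starting from one exact Jacobian and then only updating; it is not the paper's primal-dual interior point algorithm, whose Newton systems have a specific KKT structure and whose analysis concerns central-path neighborhoods. The test system and starting point are illustrative.

```python
import numpy as np

def broyden_solve(F, x0, B0, iters=30, tol=1e-10):
    """Broyden's method for F(x) = 0: keep an approximate Jacobian B and apply a
    rank-one update instead of recomputing or refactorizing it every iteration.
    Generic illustration of the low-rank quasi-Newton updating idea; NOT the
    paper's interior point algorithm."""
    x, B = x0.copy(), B0.copy()
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)              # quasi-Newton step
        x_new = x + s
        y = F(x_new) - Fx
        B += np.outer(y - B @ s, s) / (s @ s)    # rank-one Broyden update
        x = x_new
    return x

if __name__ == "__main__":
    # Small nonlinear system with a solution at (1, 1).
    F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
    x0 = np.array([2.0, 0.5])
    J0 = np.array([[2.0 * x0[0], 2.0 * x0[1]], [1.0, -1.0]])  # one exact Jacobian, then only updates
    x = broyden_solve(F, x0, B0=J0)
    print("solution:", np.round(x, 8), " residual:", np.linalg.norm(F(x)))
```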
Projection free methods on product domains
IF 2.2, CAS Zone 2 (Mathematics)
Computational Optimization and Applications, Pub Date: 2024-06-04, DOI: 10.1007/s10589-024-00585-5
Immanuel Bomze, Francesco Rinaldi, Damiano Zeffiro
Abstract: Projection-free block-coordinate methods avoid a high computational cost per iteration and at the same time exploit the particular problem structure of product domains. Frank–Wolfe-like approaches rank among the most popular ones of this type. However, as observed in the literature, there was a gap between the classical Frank–Wolfe theory and the block-coordinate case, with no guarantees of linear convergence rates even for strongly convex objectives in the latter. Moreover, most previous research concentrated on convex objectives. This study also deals with the non-convex case and reduces the above-mentioned theory gap by combining a new, fully developed convergence theory with novel active set identification results which ensure that the inherent sparsity of solutions can be exploited in an efficient way. Preliminary numerical experiments seem to justify our approach and also show promising results for obtaining global solutions in the non-convex case.
Citations: 0
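On a product domain $X = X_1 \times \cdots \times X_m$, a block-coordinate Frank–Wolfe step selects one block, calls that block's linear minimization oracle at the current gradient, and moves only that block, so no projection is ever needed. The sketch below runs a randomized block-coordinate Frank–Wolfe method on a product of two unit simplices with a convex quadratic objective; it illustrates the projection-free block-coordinate setting only, not the paper's new variants, convergence theory, or active-set identification results. The step-size rule $2/(k+2)$ and the data are standard illustrative choices.

```python
import numpy as np

def simplex_lmo(grad):
    """Linear minimization oracle over the unit simplex: return the vertex e_i
    with i = argmin_i grad_i.  Cheap, and no projection is required."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def block_coordinate_frank_wolfe(grad_f, blocks, x0, iters=2000, seed=0):
    """Randomized block-coordinate Frank-Wolfe on a product of unit simplices.
    Each iteration updates a single randomly chosen block via its LMO and the
    standard step size 2/(k+2).  Illustration of the projection-free
    block-coordinate setting; not the paper's specific variants."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(iters):
        i = rng.integers(len(blocks))            # pick one block at random
        idx = blocks[i]
        v = simplex_lmo(grad_f(x)[idx])          # block-wise linear minimization
        gamma = 2.0 / (k + 2.0)
        x[idx] = (1.0 - gamma) * x[idx] + gamma * v
    return x

if __name__ == "__main__":
    # Two simplex blocks of size 4; minimize f(x) = 0.5 * ||x - b||^2.
    blocks = [np.arange(0, 4), np.arange(4, 8)]
    b = np.array([0.4, 0.3, 0.2, 0.1, 0.0, 0.1, 0.2, 0.7])
    grad_f = lambda x: x - b
    x0 = np.full(8, 0.25)
    x = block_coordinate_frank_wolfe(grad_f, blocks, x0)
    print("block sums (should stay 1):", x[:4].sum(), x[4:].sum())
    print("solution:", np.round(x, 3))
```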