Computational Optimization and Applications: Latest Articles

A family of conjugate gradient methods with guaranteed positiveness and descent for vector optimization
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-09-17 · DOI: 10.1007/s10589-024-00609-0
Qing-Rui He, Sheng-Jie Li, Bo-Ya Zhang, Chun-Rong Chen
Abstract: In this paper, we propose a new way to ensure the positiveness of the conjugate parameter and, based on the Dai-Yuan (DY) method in the vector setting, introduce an associated family of conjugate gradient (CG) methods with guaranteed descent for solving unconstrained vector optimization problems. Several special members of the family are analyzed, and the (sufficient) descent condition is established for them (in the vector sense). Under mild conditions, a general convergence result for CG methods with specific parameters is presented, which in particular covers the global convergence of the aforementioned members. Furthermore, for comparison, we consider direct extensions of some Dai-Yuan type methods, obtained by modifying the DY method of the scalar case. These vector extensions recover the classical parameters in the scalar minimization case, and their descent property and global convergence are also studied under mild assumptions. Finally, numerical experiments illustrate the practical behavior of all proposed methods.
Citations: 0
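The classical scalar Dai-Yuan parameter that this family builds on can be sketched in a few lines. The toy implementation below (a 2-D convex quadratic with exact line search; the matrix, starting point, and tolerances are illustrative choices, not from the paper) shows how the DY parameter stays positive and produces descent directions:

```python
# Minimal sketch of the classical scalar Dai-Yuan (DY) conjugate gradient
# method on a 2-D convex quadratic. The vector-optimization variants in the
# paper build on this scalar parameter; everything below is illustrative.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dy_cg(A, b, x, tol=1e-10, max_iter=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x with the DY conjugate gradient."""
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient A x - b
    d = [-gi for gi in g]                              # initial steepest descent
    for _ in range(max_iter):
        if dot(g, g) < tol:
            break
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)                # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = dot(g_new, g_new) / dot(d, y)           # DY parameter (positive)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Minimizing the quadratic solves A x = b; for this A and b the solution
# is x = (0.2, 0.4).
A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]
x = dy_cg(A, b, [0.0, 0.0])
```

With exact line search the denominator d^T y equals g^T g > 0, which is the positiveness property the vector-valued family is designed to preserve.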
Convergence of a quasi-Newton method for solving systems of nonlinear underdetermined equations
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-09-06 · DOI: 10.1007/s10589-024-00606-3
N. Vater, A. Borzì
Abstract: The development and convergence analysis of a quasi-Newton method for the solution of systems of nonlinear underdetermined equations are investigated. Such equations arise in many application fields, e.g., the supervised learning of large overparameterised neural networks, and require efficient methods with guaranteed convergence. In this paper, a new approach for computing the Moore-Penrose inverse of the approximate Jacobian arising from the Broyden update is presented, and a semi-local convergence result for a damped quasi-Newton method is proved. The theoretical results are illustrated in detail for systems of multidimensional quadratic equations, and validated in the context of eigenvalue problems and the supervised learning of overparameterised neural networks.
Citations: 0
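The Broyden step with a Moore-Penrose pseudoinverse can be illustrated on the simplest underdetermined case, a single equation in two unknowns, where the pseudoinverse of the 1x2 Jacobian approximation has a closed form. The test problem, starting point, and iteration limits below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a Broyden quasi-Newton iteration for an underdetermined
# system F: R^2 -> R, using the Moore-Penrose pseudoinverse of the 1x2
# Jacobian approximation j: here j^+ = j^T / (j j^T).

def F(x):
    return x[0] ** 2 + x[1] ** 2 - 4.0   # one equation, two unknowns

def broyden_underdetermined(x, j, tol=1e-10, max_iter=50):
    """j is a row-vector approximation of the Jacobian of F."""
    fx = F(x)
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        jj = j[0] ** 2 + j[1] ** 2
        # Pseudoinverse step: s = -j^T F(x) / (j j^T)
        s = [-j[0] * fx / jj, -j[1] * fx / jj]
        x = [x[0] + s[0], x[1] + s[1]]
        f_new = F(x)
        # Broyden update: j <- j + (y - j s) s^T / (s^T s), y = F(x+) - F(x)
        ss = s[0] ** 2 + s[1] ** 2
        y_minus_js = (f_new - fx) - (j[0] * s[0] + j[1] * s[1])
        j = [j[0] + y_minus_js * s[0] / ss, j[1] + y_minus_js * s[1] / ss]
        fx = f_new
    return x, fx

# Start at (1, 1) with the exact Jacobian (2, 2) as the initial approximation;
# iterates converge to a point on the circle x1^2 + x2^2 = 4.
x, residual = broyden_underdetermined([1.0, 1.0], [2.0, 2.0])
```

Because the system is underdetermined, the method converges to *some* solution on the circle (here along the diagonal), which is exactly the situation studied in the semi-local analysis.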
Scaled-PAKKT sequential optimality condition for multiobjective problems and its application to an Augmented Lagrangian method
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-09-02 · DOI: 10.1007/s10589-024-00605-4
G. A. Carrizo, N. S. Fazzio, M. D. Sánchez, M. L. Schuverdt
Abstract: Based on the recently introduced Scaled Positive Approximate Karush-Kuhn-Tucker (Scaled-PAKKT) condition for single-objective problems, we derive a sequential necessary optimality condition for multiobjective problems with equality and inequality constraints as well as additional abstract set constraints. This sequential optimality condition is subject to the same requirements as ordinary (pointwise) optimality conditions: we show that the Scaled-PAKKT condition is necessary for a local weak Pareto point of the problem. Furthermore, we propose a variant of the classical Augmented Lagrangian method for multiobjective problems. Our theoretical framework does not require any scalarization. We also discuss the convergence properties of the algorithm with regard to feasibility and global optimality, without any convexity assumption. Finally, numerical results illustrate the practical viability of the method.
Citations: 0
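For orientation, the single-objective augmented Lagrangian iteration that this paper generalizes (without the multiobjective and Scaled-PAKKT machinery) can be sketched as follows; the toy equality-constrained problem, penalty value, and inner gradient loop are all illustrative assumptions:

```python
# Hedged sketch of the classical (single-objective) augmented Lagrangian
# building block. Solve: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0,
# whose solution is (0.5, 0.5) with multiplier lambda = -1.

def h(x):                       # equality constraint value
    return x[0] + x[1] - 1.0

def grad_AL(x, lam, rho):       # gradient of f + lam*h + (rho/2)*h^2
    c = lam + rho * h(x)
    return [2.0 * x[0] + c, 2.0 * x[1] + c]

def augmented_lagrangian(x, lam=0.0, rho=10.0, outer=15, inner=300):
    step = 1.0 / (2.0 + 2.0 * rho)          # safe step for this quadratic
    for _ in range(outer):
        for _ in range(inner):              # inexact inner minimization
            g = grad_AL(x, lam, rho)
            x = [x[0] - step * g[0], x[1] - step * g[1]]
        lam += rho * h(x)                   # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0])
```

The paper's contribution is to carry this scheme to multiobjective problems without scalarizing, with convergence measured against the Scaled-PAKKT condition instead of ordinary KKT residuals.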
A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-30 · DOI: 10.1007/s10589-024-00603-6
Chuan He, Heng Huang, Zhaosong Lu
Abstract: In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to nonlinear equality constraints and a convex conic constraint. In particular, we propose a Newton-conjugate-gradient (Newton-CG) based barrier-augmented Lagrangian method for finding an approximate SOSP of this problem. Under mild assumptions, we show that our method enjoys a total inner iteration complexity of $\widetilde{\mathcal{O}}(\epsilon^{-11/2})$ and an operation complexity of $\widetilde{\mathcal{O}}(\epsilon^{-11/2}\min\{n,\epsilon^{-5/4}\})$ for finding an $(\epsilon,\sqrt{\epsilon})$-SOSP of general nonconvex conic optimization with high probability. Moreover, under a constraint qualification, these complexity bounds improve to $\widetilde{\mathcal{O}}(\epsilon^{-7/2})$ and $\widetilde{\mathcal{O}}(\epsilon^{-7/2}\min\{n,\epsilon^{-3/4}\})$, respectively. To the best of our knowledge, this is the first study of the complexity of finding an approximate SOSP of general nonconvex conic optimization. Preliminary numerical results demonstrate the superiority of the proposed method over first-order methods in terms of solution quality.
Citations: 0
Robust approximation of chance constrained optimization with polynomial perturbation
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-28 · DOI: 10.1007/s10589-024-00602-7
Bo Rao, Liu Yang, Suhan Zhong, Guangming Zhou
Abstract: This paper proposes a robust approximation method for solving chance constrained optimization (CCO) of polynomials. Assuming the CCO is defined with an individual chance constraint that is affine in the decision variables, we construct a robust approximation by replacing the chance constraint with a robust constraint over an uncertainty set. When the objective function is linear or SOS-convex, the robust approximation can be equivalently transformed into linear conic optimization. Semidefinite relaxation algorithms are proposed to solve these linear conic transformations globally, and their convergence properties are studied. We also introduce a heuristic method for finding efficient uncertainty sets such that optimizers of the robust approximation are feasible for the original problem. Numerical experiments show the efficiency of our method.
Citations: 0
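As a minimal illustration of the replacement step (not the paper's semidefinite machinery), when the constraint is affine in the decision variable a box uncertainty set yields a closed-form robust counterpart; the set shape, radius, and data below are illustrative assumptions:

```python
# For a constraint (a0 + xi)^T x <= b that must hold for every perturbation
# xi in the box ||xi||_inf <= r, the worst case is attained coordinatewise,
# giving the deterministic robust counterpart
#     a0^T x + r * ||x||_1 <= b.

def robust_feasible(x, a0, b, r):
    """Check the robust counterpart of (a0 + xi)^T x <= b over the box."""
    worst = sum(a * xi for a, xi in zip(a0, x)) + r * sum(abs(xi) for xi in x)
    return worst <= b

a0, b, r = [1.0, 2.0], 5.0, 0.5
ok = robust_feasible([1.0, 1.0], a0, b, r)    # worst case 3 + 0.5*2 = 4 <= 5
bad = robust_feasible([2.0, 2.0], a0, b, r)   # worst case 6 + 0.5*4 = 8 > 5
```

The paper's method plays the same game with more expressive (polynomially parameterized) uncertainty sets, which is why the resulting counterparts require semidefinite relaxations rather than a one-line formula.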
A power-like method for finding the spectral radius of a weakly irreducible nonnegative symmetric tensor
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-17 · DOI: 10.1007/s10589-024-00601-8
Xueli Bai, Dong-Hui Li, Lei Wu, Jiefeng Xu
Abstract: The Perron-Frobenius theorem states that the spectral radius of a weakly irreducible nonnegative tensor is the unique positive eigenvalue corresponding to a positive eigenvector. With this fact in mind, the purpose of this paper is to find the spectral radius and its corresponding positive eigenvector of a weakly irreducible nonnegative symmetric tensor. By transforming the eigenvalue problem into an equivalent problem of minimizing a concave function on a closed convex set, we derive a simpler and cheaper iterative method, called the power-like method, which is well defined. Furthermore, we show that the sequences of eigenvalue estimates and eigenvector iterates generated by the power-like method Q-linearly converge to the spectral radius and its corresponding eigenvector, respectively. To accelerate the method, we introduce a line search technique; the improved method retains the same convergence property as the original version. Extensive numerical results show that the improved method performs well.
Citations: 0
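For context, the classical power iteration for nonnegative tensors (the Ng-Qi-Zhou scheme, a natural baseline for the paper's power-like method rather than the method itself) can be sketched directly; the 2x2x2 all-ones test tensor, whose spectral radius is 4, is an illustrative choice:

```python
def tensor_apply(A, x):
    """Compute (A x^{m-1})_i = sum_{j,k} A[i][j][k] x_j x_k for order m = 3."""
    n = len(x)
    return [sum(A[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
            for i in range(n)]

def power_like(A, x, iters=100):
    """Classical power iteration for nonnegative tensors:
    x <- (A x^{m-1})^{1/(m-1)}, normalized to sum to one."""
    for _ in range(iters):
        y = [v ** 0.5 for v in tensor_apply(A, x)]   # elementwise (m-1)-th root
        s = sum(y)
        x = [v / s for v in y]
    Ax = tensor_apply(A, x)
    # Eigenvalue estimate from the H-eigenvalue relation A x^{m-1} = lambda x^{[m-1]}
    lam = max(Ax[i] / x[i] ** 2 for i in range(len(x)))
    return lam, x

# All-ones 2x2x2 symmetric nonnegative tensor: (A x^2)_i = (x1 + x2)^2,
# so the positive eigenvector is (0.5, 0.5) and the spectral radius is 4.
A = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
lam, x = power_like(A, [0.6, 0.4])
```

The paper's contribution is a reformulation as concave minimization that yields a cheaper, provably Q-linearly convergent variant of this basic scheme.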
An inexact regularized proximal Newton method without line search
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-16 · DOI: 10.1007/s10589-024-00600-9
Simeon vom Dahl, Christian Kanzow
Abstract: In this paper, we introduce an inexact regularized proximal Newton method (IRPNM) that does not require any line search. The method is designed to minimize the sum of a twice continuously differentiable function $f$ and a convex (possibly nonsmooth and extended-valued) function $\varphi$. Instead of controlling the step size by a line search procedure, we update the regularization parameter in a suitable way, based on the success of the previous iteration. Global convergence of the sequence of iterates and its superlinear convergence rate under a local Hölderian error bound assumption are shown. Notably, these convergence results are obtained without requiring a global Lipschitz property for $\nabla f$, which, to the best of the authors' knowledge, is a novel contribution for proximal Newton methods. To highlight the efficiency of our approach, we provide numerical comparisons with an IRPNM using a line-search globalization and a modern FISTA-type method.
Citations: 0
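The core idea, replacing the line search by a success-driven regularization parameter, can be sketched in one dimension for f(x) = 0.5(x-3)^2 and phi(x) = |x|, whose composite minimizer is x = 2. The acceptance test and update constants below are illustrative assumptions, not the paper's rules:

```python
def soft(t, tau):
    """Proximal operator of tau * |.| (soft-thresholding)."""
    return (t - tau) if t > tau else (t + tau) if t < -tau else 0.0

def F(x):                       # composite objective: smooth f + nonsmooth phi
    return 0.5 * (x - 3.0) ** 2 + abs(x)

def irpnm_1d(x, mu=1.0, iters=50):
    """Regularized proximal Newton for f(x) = 0.5(x-3)^2, phi = |x|.
    The regularization parameter mu replaces a line search: it shrinks
    after a successful step and grows after a rejected one."""
    for _ in range(iters):
        H = 1.0 + mu                          # f''(x) + mu; here f'' = 1
        # Regularized prox-Newton step: minimize the quadratic model + phi
        x_new = soft(x - (x - 3.0) / H, 1.0 / H)
        if F(x_new) <= F(x):                  # success: accept, relax mu
            x, mu = x_new, max(mu * 0.5, 1e-12)
        else:                                 # failure: keep x, increase mu
            mu *= 2.0
    return x

x = irpnm_1d(0.0)
```

As mu shrinks toward zero the step approaches the pure proximal Newton step, which here lands exactly on the minimizer x = 2; growing mu on failure plays the role that shrinking a step size plays in a line search.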
The weighted Euclidean one-center problem in $\mathbb{R}^n$
IF 1.6 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-08 · DOI: 10.1007/s10589-024-00599-z
M. Cawood, P. Dearing
(No abstract available.)
Citations: 0
A block-coordinate approach of multi-level optimization with an application to physics-informed neural networks
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-06 · DOI: 10.1007/s10589-024-00597-1
Serge Gratton, Valentin Mercier, Elisa Riccietti, Philippe L. Toint
Abstract: Multi-level methods are widely used for the solution of large-scale problems because of their computational advantages and their exploitation of the complementarity between the involved sub-problems. After re-interpreting multi-level methods from a block-coordinate point of view, we propose a multi-level algorithm for the solution of nonlinear optimization problems and analyze its evaluation complexity. We apply it to the solution of partial differential equations using physics-informed neural networks (PINNs), considering two types of neural architectures: a generic feedforward network and a frequency-aware network. We show that our approach is particularly effective when coupled with these specialized architectures, and that this coupling results in better solutions and significant computational savings.
Citations: 0
Full-low evaluation methods for bound and linearly constrained derivative-free optimization
IF 2.2 · Mathematics (CAS Tier 2)
Computational Optimization and Applications · Pub Date: 2024-08-01 · DOI: 10.1007/s10589-024-00596-2
C. W. Royer, O. Sohab, L. N. Vicente
Abstract: Derivative-free optimization (DFO) consists in finding the best value of an objective function without relying on derivatives. To tackle such problems, one may build approximate derivatives, using for instance finite-difference estimates, or one may design algorithmic strategies that perform space exploration and seek improvement over the current point. The first type of strategy often performs well on smooth problems, but at the expense of more function evaluations; the second type is cheaper and typically handles non-smoothness or noise in the objective better. Recently, full-low evaluation methods have been proposed as a hybrid class of DFO algorithms that combine both strategies, denoted Full-Eval and Low-Eval, respectively. In the unconstrained case, these methods showed promising numerical performance. In this paper, we extend the full-low evaluation framework to bound and linearly constrained derivative-free optimization. We derive convergence results for an instance of this framework that combines finite-difference quasi-Newton steps with probabilistic direct-search steps. The former are projected onto the feasible set, while the latter are defined within tangent cones identified by nearby active constraints. We illustrate the practical performance of our instance on standard linearly constrained problems, adapted to include noisy evaluations and non-smoothness. In all cases, our method compares favorably with algorithms that rely solely on Full-Eval or Low-Eval iterations.
Citations: 0
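A minimal sketch of the Low-Eval component on a bound-constrained problem: coordinate direct search with projection onto the box. The full-low framework additionally interleaves finite-difference quasi-Newton (Full-Eval) steps; the step-size update constants and test problem here are illustrative assumptions:

```python
def project(x, lo, hi):
    """Project x onto the box [lo, hi] componentwise."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def direct_search(f, x, lo, hi, alpha=1.0, tol=1e-8, max_iter=2000):
    """Coordinate direct search with projection onto bound constraints:
    poll +/- alpha along each coordinate, accept simple decrease."""
    n = len(x)
    fx = f(x)
    while alpha > tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for i in range(n):
            for sign in (1.0, -1.0):
                y = list(x)
                y[i] += sign * alpha
                y = project(y, lo, hi)
                fy = f(y)
                if fy < fx:                  # simple decrease: accept
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if improved:
            alpha *= 2.0                     # expand after a success
        else:
            alpha *= 0.5                     # contract after a failed poll
    return x, fx

# Minimize (x1-2)^2 + (x2+1)^2 over the box [0,1] x [0,1]; the
# unconstrained minimizer lies outside, so the solution is (1, 0).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
x, fx = direct_search(f, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
```

Projection keeps every trial point feasible, which is the same mechanism the paper uses for its Full-Eval steps; its Low-Eval steps are more refined, polling within tangent cones of nearby active constraints.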