Computational Optimization and Applications: Latest Articles

Riemannian preconditioned algorithms for tensor completion via tensor ring decomposition
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-27. DOI: 10.1007/s10589-024-00559-7
Bin Gao, Renfeng Peng, Ya-xiang Yuan
{"title":"Riemannian preconditioned algorithms for tensor completion via tensor ring decomposition","authors":"Bin Gao, Renfeng Peng, Ya-xiang Yuan","doi":"10.1007/s10589-024-00559-7","DOIUrl":"https://doi.org/10.1007/s10589-024-00559-7","url":null,"abstract":"<p>We propose Riemannian preconditioned algorithms for the tensor completion problem via tensor ring decomposition. A new Riemannian metric is developed on the product space of the mode-2 unfolding matrices of the core tensors in tensor ring decomposition. The construction of this metric aims to approximate the Hessian of the cost function by its diagonal blocks, paving the way for various Riemannian optimization methods. Specifically, we propose the Riemannian gradient descent and Riemannian conjugate gradient algorithms. We prove that both algorithms globally converge to a stationary point. In the implementation, we exploit the tensor structure and adopt an economical procedure to avoid large matrix formulation and computation in gradients, which significantly reduces the computational cost. Numerical experiments on various synthetic and real-world datasets—movie ratings, hyperspectral images, and high-dimensional functions—suggest that the proposed algorithms have better or favorably comparable performance to other candidates.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139979554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Convergence of successive linear programming algorithms for noisy functions
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-26. DOI: 10.1007/s10589-024-00564-w
Christoph Hansknecht, Christian Kirches, Paul Manns
{"title":"Convergence of successive linear programming algorithms for noisy functions","authors":"Christoph Hansknecht, Christian Kirches, Paul Manns","doi":"10.1007/s10589-024-00564-w","DOIUrl":"https://doi.org/10.1007/s10589-024-00564-w","url":null,"abstract":"<p>Gradient-based methods have been highly successful for solving a variety of both unconstrained and constrained nonlinear optimization problems. In real-world applications, such as optimal control or machine learning, the necessary function and derivative information may be corrupted by noise, however. Sun and Nocedal have recently proposed a remedy for smooth unconstrained problems by means of a stabilization of the acceptance criterion for computed iterates, which leads to convergence of the iterates of a trust-region method to a region of criticality (Sun and Nocedal in Math Program 66:1–28, 2023. https://doi.org/10.1007/s10107-023-01941-9). We extend their analysis to the successive linear programming algorithm (Byrd et al. in Math Program 100(1):27–48, 2003. https://doi.org/10.1007/s10107-003-0485-4, SIAM J Optim 16(2):471–489, 2005. https://doi.org/10.1137/S1052623403426532) for unconstrained optimization problems with objectives that can be characterized as the composition of a polyhedral function with a smooth function, where the latter and its gradient may be corrupted by noise. This gives the flexibility to cover, for example, (sub)problems arising in image reconstruction or constrained optimization algorithms. We provide computational examples that illustrate the findings and point to possible strategies for practical determination of the stabilization parameter that balances the size of the critical region with a relaxation of the acceptance criterion (or descent property) of the algorithm.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139969741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

IPRSDP: a primal-dual interior-point relaxation algorithm for semidefinite programming
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-21. DOI: 10.1007/s10589-024-00558-8
Rui-Jin Zhang, Xin-Wei Liu, Yu-Hong Dai
{"title":"IPRSDP: a primal-dual interior-point relaxation algorithm for semidefinite programming","authors":"Rui-Jin Zhang, Xin-Wei Liu, Yu-Hong Dai","doi":"10.1007/s10589-024-00558-8","DOIUrl":"https://doi.org/10.1007/s10589-024-00558-8","url":null,"abstract":"<p>We propose an efficient primal-dual interior-point relaxation algorithm based on a smoothing barrier augmented Lagrangian, called IPRSDP, for solving semidefinite programming problems in this paper. The IPRSDP algorithm has three advantages over classical interior-point methods. Firstly, IPRSDP does not require the iterative points to be positive definite. Consequently, it can easily be combined with the warm-start technique used for solving many combinatorial optimization problems, which require the solutions of a series of semidefinite programming problems. Secondly, the search direction of IPRSDP is symmetric in itself, and hence the symmetrization procedure is not required any more. Thirdly, with the introduction of the smoothing barrier augmented Lagrangian function, IPRSDP can provide the explicit form of the Schur complement matrix. This enables the complexity of forming this matrix in IPRSDP to be comparable to or lower than that of many existing search directions. The global convergence of IPRSDP is established under suitable assumptions. Numerical experiments are made on the SDPLIB set, which demonstrate the efficiency of IPRSDP.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

A projected-search interior-point method for nonlinearly constrained optimization
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-21. DOI: 10.1007/s10589-023-00549-1
Philip E. Gill, Minxin Zhang
{"title":"A projected-search interior-point method for nonlinearly constrained optimization","authors":"Philip E. Gill, Minxin Zhang","doi":"10.1007/s10589-023-00549-1","DOIUrl":"https://doi.org/10.1007/s10589-023-00549-1","url":null,"abstract":"<p>This paper concerns the formulation and analysis of a new interior-point method for constrained optimization that combines a shifted primal-dual interior-point method with a projected-search method for bound-constrained optimization. The method involves the computation of an approximate Newton direction for a primal-dual penalty-barrier function that incorporates shifts on both the primal and dual variables. Shifts on the dual variables allow the method to be safely “warm started” from a good approximate solution and avoids the possibility of very large solutions of the associated path-following equations. The approximate Newton direction is used in conjunction with a new projected-search line-search algorithm that employs a flexible non-monotone quasi-Armijo line search for the minimization of each penalty-barrier function. Numerical results are presented for a large set of constrained optimization problems. For comparison purposes, results are also given for two primal-dual interior-point methods that do not use projection. The first is a method that shifts both the primal and dual variables. The second is a method that involves shifts on the primal variables only. The results show that the use of both primal and dual shifts in conjunction with projection gives a method that is more robust and requires significantly fewer iterations. In particular, the number of times that the search direction must be computed is substantially reduced. Results from a set of quadratic programming test problems indicate that the method is particularly well-suited to solving the quadratic programming subproblem in a sequential quadratic programming method for nonlinear optimization.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

An infeasible interior-point arc-search method with Nesterov's restarting strategy for linear programming problems
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-20. DOI: 10.1007/s10589-024-00561-z
Einosuke Iida, Makoto Yamashita
{"title":"An infeasible interior-point arc-search method with Nesterov’s restarting strategy for linear programming problems","authors":"Einosuke Iida, Makoto Yamashita","doi":"10.1007/s10589-024-00561-z","DOIUrl":"https://doi.org/10.1007/s10589-024-00561-z","url":null,"abstract":"<p>An arc-search interior-point method is a type of interior-point method that approximates the central path by an ellipsoidal arc, and it can often reduce the number of iterations. In this work, to further reduce the number of iterations and the computation time for solving linear programming problems, we propose two arc-search interior-point methods using Nesterov’s restarting strategy which is a well-known method to accelerate the gradient method with a momentum term. The first one generates a sequence of iterations in the neighborhood, and we prove that the proposed method converges to an optimal solution and that it is a polynomial-time method. The second one incorporates the concept of the Mehrotra-type interior-point method to improve numerical performance. The numerical experiments demonstrate that the second one reduced the number of iterations and the computational time compared to existing interior-point methods due to the momentum term.\u0000</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Convex mixed-integer nonlinear programs derived from generalized disjunctive programming using cones
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-20. DOI: 10.1007/s10589-024-00557-9
David E. Bernal Neira, Ignacio E. Grossmann
{"title":"Convex mixed-integer nonlinear programs derived from generalized disjunctive programming using cones","authors":"David E. Bernal Neira, Ignacio E. Grossmann","doi":"10.1007/s10589-024-00557-9","DOIUrl":"https://doi.org/10.1007/s10589-024-00557-9","url":null,"abstract":"<p>We propose the formulation of convex Generalized Disjunctive Programming (GDP) problems using conic inequalities leading to conic GDP problems. We then show the reformulation of conic GDPs into Mixed-Integer Conic Programming (MICP) problems through both the big-M and hull reformulations. These reformulations have the advantage that they are representable using the same cones as the original conic GDP. In the case of the hull reformulation, they require no approximation of the perspective function. Moreover, the MICP problems derived can be solved by specialized conic solvers and offer a natural extended formulation amenable to both conic and gradient-based solvers. We present the closed form of several convex functions and their respective perspectives in conic sets, allowing users to formulate their conic GDP problems easily. We finally implement a large set of conic GDP examples and solve them via the scalar nonlinear and conic mixed-integer reformulations. These examples include applications from Process Systems Engineering, Machine learning, and randomly generated instances. Our results show that the conic structure can be exploited to solve these challenging MICP problems more efficiently. Our main contribution is providing the reformulations, examples, and computational results that support the claim that taking advantage of conic formulations of convex GDP instead of their nonlinear algebraic descriptions can lead to a more efficient solution to these problems.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-20. DOI: 10.1007/s10589-024-00560-0
Ruyu Liu, Shaohua Pan, Yuqia Wu, Xiaoqi Yang
{"title":"An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization","authors":"Ruyu Liu, Shaohua Pan, Yuqia Wu, Xiaoqi Yang","doi":"10.1007/s10589-024-00560-0","DOIUrl":"https://doi.org/10.1007/s10589-024-00560-0","url":null,"abstract":"<p>This paper focuses on the minimization of a sum of a twice continuously differentiable function <i>f</i> and a nonsmooth convex function. An inexact regularized proximal Newton method is proposed by an approximation to the Hessian of <i>f</i> involving the <span>(varrho )</span>th power of the KKT residual. For <span>(varrho =0)</span>, we justify the global convergence of the iterate sequence for the KL objective function and its R-linear convergence rate for the KL objective function of exponent 1/2. For <span>(varrho in (0,1))</span>, by assuming that cluster points satisfy a locally Hölderian error bound of order <i>q</i> on a second-order stationary point set and a local error bound of order <span>(q&gt;1!+!varrho )</span> on the common stationary point set, respectively, we establish the global convergence of the iterate sequence and its superlinear convergence rate with order depending on <i>q</i> and <span>(varrho )</span>. A dual semismooth Newton augmented Lagrangian method is also developed for seeking an inexact minimizer of subproblems. Numerical comparisons with two state-of-the-art methods on <span>(ell _1)</span>-regularized Student’s <i>t</i>-regressions, group penalized Student’s <i>t</i>-regressions, and nonconvex image restoration confirm the efficiency of the proposed method.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Convex approximations of two-stage risk-averse mixed-integer recourse models
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-13. DOI: 10.1007/s10589-024-00555-x
E. Ruben van Beesten, Ward Romeijnders, Kees Jan Roodbergen
{"title":"Convex approximations of two-stage risk-averse mixed-integer recourse models","authors":"E. Ruben van Beesten, Ward Romeijnders, Kees Jan Roodbergen","doi":"10.1007/s10589-024-00555-x","DOIUrl":"https://doi.org/10.1007/s10589-024-00555-x","url":null,"abstract":"<p>We consider two-stage risk-averse mixed-integer recourse models with law invariant coherent risk measures. As in the risk-neutral case, these models are generally non-convex as a result of the integer restrictions on the second-stage decision variables and hence, hard to solve. To overcome this issue, we propose a convex approximation approach. We derive a performance guarantee for this approximation in the form of an asymptotic error bound, which depends on the choice of risk measure. This error bound, which extends an existing error bound for the conditional value at risk, shows that our approximation method works particularly well if the distribution of the random parameters in the model is highly dispersed. For special cases we derive tighter, non-asymptotic error bounds. Whereas our error bounds are valid only for a continuously distributed second-stage right-hand side vector, practical optimization methods often require discrete distributions. In this context, we show that our error bounds provide statistical error bounds for the corresponding (discretized) sample average approximation (SAA) model. In addition, we construct a Benders’ decomposition algorithm that uses our convex approximations in an SAA-framework and we provide a performance guarantee for the resulting algorithm solution. Finally, we perform numerical experiments which show that for certain risk measures our approach works even better than our theoretical performance guarantees suggest.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Coordinate descent methods beyond smoothness and separability
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-13. DOI: 10.1007/s10589-024-00556-w
Flavia Chorobura, Ion Necoara
{"title":"Coordinate descent methods beyond smoothness and separability","authors":"Flavia Chorobura, Ion Necoara","doi":"10.1007/s10589-024-00556-w","DOIUrl":"https://doi.org/10.1007/s10589-024-00556-w","url":null,"abstract":"<p>This paper deals with convex nonsmooth optimization problems. We introduce a general smooth approximation framework for the original function and apply random (accelerated) coordinate descent methods for minimizing the corresponding smooth approximations. Our framework covers the most important classes of smoothing techniques from the literature. Based on this general framework for the smooth approximation and using coordinate descent type methods we derive convergence rates in function values for the original objective. Moreover, if the original function satisfies a growth condition, then we prove that the smooth approximations also inherits this condition and consequently the convergence rates are improved in this case. We also present a relative randomized coordinate descent algorithm for solving nonseparable minimization problems with the objective function relative smooth along coordinates w.r.t. a (possibly nonseparable) differentiable function. For this algorithm we also derive convergence rates in the convex case and under the growth condition for the objective.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Accelerated forward–backward algorithms for structured monotone inclusions
IF 2.2, CAS Q2, Mathematics
Computational Optimization and Applications. Pub Date: 2024-02-11. DOI: 10.1007/s10589-023-00547-3
Paul-Emile Maingé, André Weng-Law
{"title":"Accelerated forward–backward algorithms for structured monotone inclusions","authors":"Paul-Emile Maingé, André Weng-Law","doi":"10.1007/s10589-023-00547-3","DOIUrl":"https://doi.org/10.1007/s10589-023-00547-3","url":null,"abstract":"<p>In this paper, we develop rapidly convergent forward–backward algorithms for computing zeroes of the sum of two maximally monotone operators. A modification of the classical forward–backward method is considered, by incorporating an inertial term (closed to the acceleration techniques introduced by Nesterov), a constant relaxation factor and a correction term, along with a preconditioning process. In a Hilbert space setting, we prove the weak convergence to equilibria of the iterates <span>((x_n))</span>, with worst-case rates of <span>( o(n^{-1}))</span> in terms of both the discrete velocity and the fixed point residual, instead of the rates of <span>(mathcal {O}(n^{-1/2}))</span> classically established for related algorithms. Our procedure can be also adapted to more general monotone inclusions. In particular, we propose a fast primal-dual algorithmic solution to some class of convex-concave saddle point problems. In addition, we provide a well-adapted framework for solving this class of problems by means of standard proximal-like algorithms dedicated to structured monotone inclusions. Numerical experiments are also performed so as to enlighten the efficiency of the proposed strategy.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139754018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0