{"title":"Riemannian preconditioned algorithms for tensor completion via tensor ring decomposition","authors":"Bin Gao, Renfeng Peng, Ya-xiang Yuan","doi":"10.1007/s10589-024-00559-7","DOIUrl":"https://doi.org/10.1007/s10589-024-00559-7","url":null,"abstract":"<p>We propose Riemannian preconditioned algorithms for the tensor completion problem via tensor ring decomposition. A new Riemannian metric is developed on the product space of the mode-2 unfolding matrices of the core tensors in tensor ring decomposition. The construction of this metric aims to approximate the Hessian of the cost function by its diagonal blocks, paving the way for various Riemannian optimization methods. Specifically, we propose the Riemannian gradient descent and Riemannian conjugate gradient algorithms. We prove that both algorithms globally converge to a stationary point. In the implementation, we exploit the tensor structure and adopt an economical procedure to avoid large matrix formulation and computation in gradients, which significantly reduces the computational cost. Numerical experiments on various synthetic and real-world datasets—movie ratings, hyperspectral images, and high-dimensional functions—suggest that the proposed algorithms have better or favorably comparable performance to other candidates.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139979554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convergence of successive linear programming algorithms for noisy functions
Christoph Hansknecht, Christian Kirches, Paul Manns
Computational Optimization and Applications, published online 2024-02-26. https://doi.org/10.1007/s10589-024-00564-w

Abstract: Gradient-based methods have been highly successful for solving a variety of both unconstrained and constrained nonlinear optimization problems. In real-world applications such as optimal control or machine learning, however, the necessary function and derivative information may be corrupted by noise. For smooth unconstrained problems, Sun and Nocedal have recently proposed a remedy that stabilizes the acceptance criterion for computed iterates, so that the iterates of a trust-region method converge to a region of criticality (Sun and Nocedal, Math Program 66:1–28, 2023, https://doi.org/10.1007/s10107-023-01941-9). We extend their analysis to the successive linear programming algorithm (Byrd et al., Math Program 100(1):27–48, 2003, https://doi.org/10.1007/s10107-003-0485-4; SIAM J Optim 16(2):471–489, 2005, https://doi.org/10.1137/S1052623403426532) for unconstrained optimization problems whose objective is the composition of a polyhedral function with a smooth function, where the smooth function and its gradient may be corrupted by noise. This setting is flexible enough to cover, for example, (sub)problems arising in image reconstruction or in constrained optimization algorithms. We provide computational examples that illustrate the findings and point to possible strategies for determining, in practice, the stabilization parameter that balances the size of the critical region against the relaxation of the algorithm's acceptance criterion (or descent property).
{"title":"IPRSDP: a primal-dual interior-point relaxation algorithm for semidefinite programming","authors":"Rui-Jin Zhang, Xin-Wei Liu, Yu-Hong Dai","doi":"10.1007/s10589-024-00558-8","DOIUrl":"https://doi.org/10.1007/s10589-024-00558-8","url":null,"abstract":"<p>We propose an efficient primal-dual interior-point relaxation algorithm based on a smoothing barrier augmented Lagrangian, called IPRSDP, for solving semidefinite programming problems in this paper. The IPRSDP algorithm has three advantages over classical interior-point methods. Firstly, IPRSDP does not require the iterative points to be positive definite. Consequently, it can easily be combined with the warm-start technique used for solving many combinatorial optimization problems, which require the solutions of a series of semidefinite programming problems. Secondly, the search direction of IPRSDP is symmetric in itself, and hence the symmetrization procedure is not required any more. Thirdly, with the introduction of the smoothing barrier augmented Lagrangian function, IPRSDP can provide the explicit form of the Schur complement matrix. This enables the complexity of forming this matrix in IPRSDP to be comparable to or lower than that of many existing search directions. The global convergence of IPRSDP is established under suitable assumptions. Numerical experiments are made on the SDPLIB set, which demonstrate the efficiency of IPRSDP.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A projected-search interior-point method for nonlinearly constrained optimization","authors":"Philip E. Gill, Minxin Zhang","doi":"10.1007/s10589-023-00549-1","DOIUrl":"https://doi.org/10.1007/s10589-023-00549-1","url":null,"abstract":"<p>This paper concerns the formulation and analysis of a new interior-point method for constrained optimization that combines a shifted primal-dual interior-point method with a projected-search method for bound-constrained optimization. The method involves the computation of an approximate Newton direction for a primal-dual penalty-barrier function that incorporates shifts on both the primal and dual variables. Shifts on the dual variables allow the method to be safely “warm started” from a good approximate solution and avoids the possibility of very large solutions of the associated path-following equations. The approximate Newton direction is used in conjunction with a new projected-search line-search algorithm that employs a flexible non-monotone quasi-Armijo line search for the minimization of each penalty-barrier function. Numerical results are presented for a large set of constrained optimization problems. For comparison purposes, results are also given for two primal-dual interior-point methods that do not use projection. The first is a method that shifts both the primal and dual variables. The second is a method that involves shifts on the primal variables only. The results show that the use of both primal and dual shifts in conjunction with projection gives a method that is more robust and requires significantly fewer iterations. In particular, the number of times that the search direction must be computed is substantially reduced. Results from a set of quadratic programming test problems indicate that the method is particularly well-suited to solving the quadratic programming subproblem in a sequential quadratic programming method for nonlinear optimization.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An infeasible interior-point arc-search method with Nesterov’s restarting strategy for linear programming problems","authors":"Einosuke Iida, Makoto Yamashita","doi":"10.1007/s10589-024-00561-z","DOIUrl":"https://doi.org/10.1007/s10589-024-00561-z","url":null,"abstract":"<p>An arc-search interior-point method is a type of interior-point method that approximates the central path by an ellipsoidal arc, and it can often reduce the number of iterations. In this work, to further reduce the number of iterations and the computation time for solving linear programming problems, we propose two arc-search interior-point methods using Nesterov’s restarting strategy which is a well-known method to accelerate the gradient method with a momentum term. The first one generates a sequence of iterations in the neighborhood, and we prove that the proposed method converges to an optimal solution and that it is a polynomial-time method. The second one incorporates the concept of the Mehrotra-type interior-point method to improve numerical performance. The numerical experiments demonstrate that the second one reduced the number of iterations and the computational time compared to existing interior-point methods due to the momentum term.\u0000</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convex mixed-integer nonlinear programs derived from generalized disjunctive programming using cones","authors":"David E. Bernal Neira, Ignacio E. Grossmann","doi":"10.1007/s10589-024-00557-9","DOIUrl":"https://doi.org/10.1007/s10589-024-00557-9","url":null,"abstract":"<p>We propose the formulation of convex Generalized Disjunctive Programming (GDP) problems using conic inequalities leading to conic GDP problems. We then show the reformulation of conic GDPs into Mixed-Integer Conic Programming (MICP) problems through both the big-M and hull reformulations. These reformulations have the advantage that they are representable using the same cones as the original conic GDP. In the case of the hull reformulation, they require no approximation of the perspective function. Moreover, the MICP problems derived can be solved by specialized conic solvers and offer a natural extended formulation amenable to both conic and gradient-based solvers. We present the closed form of several convex functions and their respective perspectives in conic sets, allowing users to formulate their conic GDP problems easily. We finally implement a large set of conic GDP examples and solve them via the scalar nonlinear and conic mixed-integer reformulations. These examples include applications from Process Systems Engineering, Machine learning, and randomly generated instances. Our results show that the conic structure can be exploited to solve these challenging MICP problems more efficiently. Our main contribution is providing the reformulations, examples, and computational results that support the claim that taking advantage of conic formulations of convex GDP instead of their nonlinear algebraic descriptions can lead to a more efficient solution to these problems.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization","authors":"Ruyu Liu, Shaohua Pan, Yuqia Wu, Xiaoqi Yang","doi":"10.1007/s10589-024-00560-0","DOIUrl":"https://doi.org/10.1007/s10589-024-00560-0","url":null,"abstract":"<p>This paper focuses on the minimization of a sum of a twice continuously differentiable function <i>f</i> and a nonsmooth convex function. An inexact regularized proximal Newton method is proposed by an approximation to the Hessian of <i>f</i> involving the <span>(varrho )</span>th power of the KKT residual. For <span>(varrho =0)</span>, we justify the global convergence of the iterate sequence for the KL objective function and its R-linear convergence rate for the KL objective function of exponent 1/2. For <span>(varrho in (0,1))</span>, by assuming that cluster points satisfy a locally Hölderian error bound of order <i>q</i> on a second-order stationary point set and a local error bound of order <span>(q>1!+!varrho )</span> on the common stationary point set, respectively, we establish the global convergence of the iterate sequence and its superlinear convergence rate with order depending on <i>q</i> and <span>(varrho )</span>. A dual semismooth Newton augmented Lagrangian method is also developed for seeking an inexact minimizer of subproblems. Numerical comparisons with two state-of-the-art methods on <span>(ell _1)</span>-regularized Student’s <i>t</i>-regressions, group penalized Student’s <i>t</i>-regressions, and nonconvex image restoration confirm the efficiency of the proposed method.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convex approximations of two-stage risk-averse mixed-integer recourse models
E. Ruben van Beesten, Ward Romeijnders, Kees Jan Roodbergen
Computational Optimization and Applications, published online 2024-02-13. https://doi.org/10.1007/s10589-024-00555-x

Abstract: We consider two-stage risk-averse mixed-integer recourse models with law-invariant coherent risk measures. As in the risk-neutral case, these models are generally non-convex as a result of the integer restrictions on the second-stage decision variables, and hence hard to solve. To overcome this issue, we propose a convex approximation approach. We derive a performance guarantee for this approximation in the form of an asymptotic error bound that depends on the choice of risk measure. This error bound, which extends an existing error bound for the conditional value at risk, shows that our approximation method works particularly well when the distribution of the random parameters in the model is highly dispersed. For special cases we derive tighter, non-asymptotic error bounds. Whereas our error bounds are valid only for a continuously distributed second-stage right-hand-side vector, practical optimization methods often require discrete distributions; in this context, we show that our error bounds yield statistical error bounds for the corresponding (discretized) sample average approximation (SAA) model. In addition, we construct a Benders' decomposition algorithm that uses our convex approximations in an SAA framework, and we provide a performance guarantee for the resulting algorithm solution. Finally, we perform numerical experiments which show that for certain risk measures our approach works even better than our theoretical performance guarantees suggest.
{"title":"Coordinate descent methods beyond smoothness and separability","authors":"Flavia Chorobura, Ion Necoara","doi":"10.1007/s10589-024-00556-w","DOIUrl":"https://doi.org/10.1007/s10589-024-00556-w","url":null,"abstract":"<p>This paper deals with convex nonsmooth optimization problems. We introduce a general smooth approximation framework for the original function and apply random (accelerated) coordinate descent methods for minimizing the corresponding smooth approximations. Our framework covers the most important classes of smoothing techniques from the literature. Based on this general framework for the smooth approximation and using coordinate descent type methods we derive convergence rates in function values for the original objective. Moreover, if the original function satisfies a growth condition, then we prove that the smooth approximations also inherits this condition and consequently the convergence rates are improved in this case. We also present a relative randomized coordinate descent algorithm for solving nonseparable minimization problems with the objective function relative smooth along coordinates w.r.t. a (possibly nonseparable) differentiable function. For this algorithm we also derive convergence rates in the convex case and under the growth condition for the objective.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerated forward–backward algorithms for structured monotone inclusions","authors":"Paul-Emile Maingé, André Weng-Law","doi":"10.1007/s10589-023-00547-3","DOIUrl":"https://doi.org/10.1007/s10589-023-00547-3","url":null,"abstract":"<p>In this paper, we develop rapidly convergent forward–backward algorithms for computing zeroes of the sum of two maximally monotone operators. A modification of the classical forward–backward method is considered, by incorporating an inertial term (closed to the acceleration techniques introduced by Nesterov), a constant relaxation factor and a correction term, along with a preconditioning process. In a Hilbert space setting, we prove the weak convergence to equilibria of the iterates <span>((x_n))</span>, with worst-case rates of <span>( o(n^{-1}))</span> in terms of both the discrete velocity and the fixed point residual, instead of the rates of <span>(mathcal {O}(n^{-1/2}))</span> classically established for related algorithms. Our procedure can be also adapted to more general monotone inclusions. In particular, we propose a fast primal-dual algorithmic solution to some class of convex-concave saddle point problems. In addition, we provide a well-adapted framework for solving this class of problems by means of standard proximal-like algorithms dedicated to structured monotone inclusions. Numerical experiments are also performed so as to enlighten the efficiency of the proposed strategy.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139754018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}