{"title":"Steering exact penalty DCA for nonsmooth DC optimisation problems with equality and inequality constraints","authors":"M. V. Dolgopolik","doi":"10.1080/10556788.2023.2167992","DOIUrl":"https://doi.org/10.1080/10556788.2023.2167992","url":null,"abstract":"We propose and study a version of the DCA (Difference-of-Convex functions Algorithm) using the penalty function for solving nonsmooth DC optimisation problems with nonsmooth DC equality and inequality constraints. The method employs an adaptive penalty updating strategy to improve its performance. This strategy is based on the so-called steering exact penalty methodology and relies on solving some auxiliary convex subproblems to determine a suitable value of the penalty parameter. We present a detailed convergence analysis of the method and illustrate its practical performance by applying the method to two nonsmooth discrete optimal control problem.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"72 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113954717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conic optimization-based algorithms for nonnegative matrix factorization","authors":"V. Leplat, Y. Nesterov, Nicolas Gillis, F. Glineur","doi":"10.1080/10556788.2023.2189714","DOIUrl":"https://doi.org/10.1080/10556788.2023.2189714","url":null,"abstract":"Nonnegative matrix factorization is the following problem: given a nonnegative input matrix V and a factorization rank K, compute two nonnegative matrices, W with K columns and H with K rows, such that WH approximates V as well as possible. In this paper, we propose two new approaches for computing high-quality NMF solutions using conic optimization. These approaches rely on the same two steps. First, we reformulate NMF as minimizing a concave function over a product of convex cones – one approach is based on the exponential cone and the other on the second-order cone. Then, we solve these reformulations iteratively: at each step, we minimize exactly, over the feasible set, a majorization of the objective functions obtained via linearization at the current iterate. Hence these subproblems are convex conic programs and can be solved efficiently using dedicated algorithms. We prove that our approaches reach a stationary point with an accuracy decreasing as , where i denotes the iteration number. To the best of our knowledge, our analysis is the first to provide a convergence rate to stationary points for NMF. Furthermore, in the particular cases of rank-1 factorizations (i.e. K = 1), we show that one of our formulations can be expressed as a convex optimization problem, implying that optimal rank-1 approximations can be computed efficiently. Finally, we show on several numerical examples that our approaches are able to frequently compute exact NMFs (i.e. 
with V = WH) and compete favourably with the state of the art.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115748319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An incremental descent method for multi-objective optimization","authors":"I. F. D. Oliveira, R. Takahashi","doi":"10.1080/10556788.2022.2124989","DOIUrl":"https://doi.org/10.1080/10556788.2022.2124989","url":null,"abstract":"ABSTRACT Multi-objective steepest descent, under the assumption of lower-bounded objective functions with L-Lipschitz continuous gradients, requires gradient and function computations to produce a measure of proximity to critical conditions akin to in the single-objective setting, where m is the number of objectives considered. We reduce this to with a multi-objective incremental approach that has a computational cost that does not grow with the number of objective functions m.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126918810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A majorization penalty method for SVM with sparse constraint","authors":"Si-Tong Lu, Qingna Li","doi":"10.1080/10556788.2022.2142584","DOIUrl":"https://doi.org/10.1080/10556788.2022.2142584","url":null,"abstract":"Support vector machine (SVM) is an important and fundamental technique in machine learning. Soft-margin SVM models have stronger generalization performance compared with the hard-margin SVM. Most existing works use the hinge-loss function which can be regarded as an upper bound of the 0–1 loss function. However, it cannot explicitly control the number of misclassified samples. In this paper, we use the idea of soft-margin SVM and propose a new SVM model with a sparse constraint. Our model can strictly limit the number of misclassified samples, expressing the soft-margin constraint as a sparse constraint. By constructing a majorization function, a majorization penalty method can be used to solve the sparse-constrained optimization problem. We apply Conjugate-Gradient (CG) method to solve the resulting subproblem. Extensive numerical results demonstrate the impressive performance of the proposed majorization penalty method.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131229984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflected three-operator splitting method for monotone inclusion problem","authors":"O. Iyiola, C. Enyi, Y. Shehu","doi":"10.1080/10556788.2021.1924715","DOIUrl":"https://doi.org/10.1080/10556788.2021.1924715","url":null,"abstract":"In this paper, we consider reflected three-operator splitting methods for monotone inclusion problems in real Hilbert spaces. To do this, we first obtain weak convergence analysis and nonasymptotic convergence rate of the reflected Krasnosel'skiĭ-Mann iteration for finding a fixed point of nonexpansive mapping in real Hilbert spaces under some seemingly easy to implement conditions on the iterative parameters. We then apply our results to three-operator splitting for the monotone inclusion problem and consequently obtain the corresponding convergence analysis. Furthermore, we derive reflected primal-dual algorithms for highly structured monotone inclusion problems. Some numerical implementations are drawn from splitting methods to support the theoretical analysis.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124782867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Operator splitting for adaptive radiation therapy with nonlinear health dynamics","authors":"A. Fu, L. Xing, Stephen P. Boyd","doi":"10.1080/10556788.2022.2078824","DOIUrl":"https://doi.org/10.1080/10556788.2022.2078824","url":null,"abstract":"ABSTRACT We present an optimization-based approach to radiation treatment planning over time. Our approach formulates treatment planning as an optimal control problem with nonlinear patient health dynamics derived from the standard linear-quadratic cell survival model. As the formulation is nonconvex, we propose a method for obtaining an approximate solution by solving a sequence of convex optimization problems. This method is fast, efficient, and robust to model error, adapting readily to changes in the patient's health between treatment sessions. Moreover, we show that it can be combined with the operator splitting method ADMM to produce an algorithm that is highly scalable and can handle large clinical cases. We introduce an open-source Python implementation of our algorithm, AdaRad, and demonstrate its performance on several examples.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123129233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An inertial subgradient extragradient algorithm with adaptive stepsizes for variational inequality problems","authors":"Xiaokai Chang, Sanyang Liu, Zhao Deng, Suoping Li","doi":"10.1080/10556788.2021.1910946","DOIUrl":"https://doi.org/10.1080/10556788.2021.1910946","url":null,"abstract":"In this paper, we introduce an efficient subgradient extragradient (SE) based method for solving variational inequality problems with monotone operator in Hilbert space. In many existing SE methods, two values of operator are needed over each iteration and the Lipschitz constant of the operator or linesearch is required for estimating step sizes, which are usually not practical and expensive. To overcome these drawbacks, we present an inertial SE based algorithm with adaptive step sizes, estimated by using an approximation of the local Lipschitz constant without running a linesearch. Each iteration of the method only requires a projection on the feasible set and a value of the operator. The numerical experiments illustrate the efficiency of the proposed algorithm.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131098613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A competitive inexact nonmonotone filter SQP method: convergence analysis and numerical results","authors":"Hani Ahmadzadeh, N. Mahdavi-Amiri","doi":"10.1080/10556788.2021.1913155","DOIUrl":"https://doi.org/10.1080/10556788.2021.1913155","url":null,"abstract":"We propose an inexact nonmonotone successive quadratic programming (SQP) algorithm for solving nonlinear programming problems with equality constraints and bounded variables. Regarding the value of the current feasibility violation and the minimum value of its linear approximation over a trust region, several scenarios are envisaged. In one scenario, a possible infeasible stationary point is detected. In other scenarios, the search direction is computed using an inexact (truncated) solution of a feasible strictly convex quadratic program (QP). The search direction is shown to be a descent direction for the objective function or the feasibility violation in the feasible or infeasible iterations, respectively. A new penalty parameter updating formula is proposed to turn the search direction into a descent direction for an -penalty function. In certain iterations, an accelerator direction is developed to obtain a superlinear local convergence rate of the algorithm. Using a nonmonotone filter strategy, the global convergence of the algorithm and a superlinear local rate of convergence are guaranteed. The main advantage of the algorithm is that the global convergence of the algorithm is established using inexact solutions of the QPs. Furthermore, the use of inexact solutions instead of exact solutions of the subproblems enhances the robustness and efficiency of the algorithm. The algorithm is implemented using MATLAB and the program is tested on a wide range of test problems from the CUTEst library. 
Comparison of the obtained numerical results with those obtained by testing some similar SQP algorithms affirms the efficiency and robustness of the proposed algorithm.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115218093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A globally convergent regularized interior point method for constrained optimization","authors":"Songqiang Qiu","doi":"10.1080/10556788.2021.1908283","DOIUrl":"https://doi.org/10.1080/10556788.2021.1908283","url":null,"abstract":"This paper proposes a globally convergent regularized interior point method that involves a specifically designed regularization strategy for constrained optimization. The main concept of the proposed algorithm is that when a proper regularization parameter is selected, the direction obtained from the regularized barrier equation is a descent direction for either the objective function or constraint violation. Accordingly, by embedding a flexible strategy of choosing a regularization parameter in a trust-funnel-like interior point scheme, we propose the new algorithm. Global convergence under the mild assumptions of relaxed constant rank constraint qualification (RCRCQ) and local consistency of the linearized active and equality constraints is shown. Preliminary numerical experiments are conducted, and the results are encouraging.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130244307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A primer on the application of neural networks to covering array generation","authors":"Ludwig Kampel, Michael Wagner, I. Kotsireas, D. Simos","doi":"10.1080/10556788.2021.1907384","DOIUrl":"https://doi.org/10.1080/10556788.2021.1907384","url":null,"abstract":"In the past, combinatorial structures have been used only to tune parameters of neural networks. In this work, we employ neural networks in the form of Boltzmann machines and Hopfield networks for the construction of a specific class of combinatorial designs, namely covering arrays (CAs). In past works, these neural networks were successfully used to solve set cover instances. For the construction of CAs, we consider the corresponding set cover instances and use neural networks to solve such instances. We adapt existing algorithms for solving general set cover instances, which are based on Boltzmann machines and Hopfield networks and apply them for CA construction. Furthermore, for the algorithm based on Boltzmann machines, we consider newly designed versions, where we deploy structural changes of the underlying Boltzmann machine, adding a feedback loop. Additionally, one variant of this algorithm employs learning techniques based on neural networks to adjust the various connections encountered in the graph representing the considered set cover instances. 
Culminating in a comprehensive experimental evaluation, our work presents the first study of applications of neural networks in the field of covering array generation and related discrete structures and may act as a guideline for future investigations.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126629151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}