{"title":"The complexity of high-order predictor-corrector methods for solving sufficient linear complementarity problems","authors":"J. Stoer, Martin Wechs","doi":"10.1080/10556789808805721","DOIUrl":"https://doi.org/10.1080/10556789808805721","url":null,"abstract":"Recently the authors of this paper and S. Mizuno described a class of infeasible-interiorpoint methods for solving linear complementarity problems that are sufficient in the sense of R.W. Cottle, J.-S. Pang and V. Venkateswaran (1989) Sufficient matrices and the linear complementarity problemLinear Algebra AppL 114/115,231-249. It was shown that these methods converge superlinearly with an arbitrarily high order even for degenerate problems or problems without strictly complementary solution. In this paper the complexity of these methods is investigated. It is shown that all these methods, if started appropriately, need predictor-corrector steps to find an e-solution, and only steps, if the problem has strictly interior points. HereK is the sufficiency parameter of the complementarity problem.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"59 1","pages":"393-417"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84844864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On a lagrange — Newton method for a nonlinear parabolic boundary control problem ∗","authors":"H. Goldberg, F. Tröltzscht","doi":"10.1080/10556789808805678","DOIUrl":"https://doi.org/10.1080/10556789808805678","url":null,"abstract":"An optimal control problem governed by the heat equation with nonlinear boundary conditions is considered. The objective functional consists of a quadratic terminal part aifid a quadratic regularization term. On transforming the associated optimality system to! a generalized equation, an SQP method for solving the optimal control problem is related to the Newton method for the generalized equation. In this way, the convergence of tfie SQP method is shown by proving the strong regularity of the optimality system. Aftjer explaining the numerical implementation of the theoretical results some high precision test examples are presented","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"198 1","pages":"225-247"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83463935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simple algebraic proof of Farkas's lemma and related theorems","authors":"C. G. Broyden","doi":"10.1080/10556789808805676","DOIUrl":"https://doi.org/10.1080/10556789808805676","url":null,"abstract":"A proof is given of Farkas's lemma based on a new theorem pertaining to orthogodal matrices. It is claimed that this theorem is slightly more general than Tucker's theorem, which concerns skew-symmetric matrices and which may itself be derived simply from tne new theorem. Farkas's lemma and other theorems of the alternative then follow trivially from Tucker's theorem","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"107 5 1","pages":"185-199"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89743214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On free variables in interior point methods","authors":"C. Mészáros","doi":"10.1080/10556789808805689","DOIUrl":"https://doi.org/10.1080/10556789808805689","url":null,"abstract":"Interior point methods, especially the algorithms for linear programming problems are sensitive if there are unconstrained (free) variables in the problem. While replacing a free variable by two nonnegative ones may cause numerical instabilities, the implicit handling results in a semidefinite scaling matrix at each interior point iteration. In the paper we investigate the effects if the scaling matrix is regularized. Our analysis will prove that the effect of the regularization can be easily monitored and corrected if necessary. We describe the regularization scheme mainly for the efficient handling of free variables, but a similar analysis can be made for the case, when the small scaling factors are raised to larger values to improve the numerical stability of the systems that define the searcn direction. We will show the superiority of our approach over the variable replacement method on a set of test problems arising from water management application","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"5 1","pages":"121-139"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89479856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regularization tools for training large feed-forward neural networks using automatic differentiation ∗","authors":"J. Eriksson, M. Gulliksson, Per Lindström, P. Wedin","doi":"10.1080/10556789808805701","DOIUrl":"https://doi.org/10.1080/10556789808805701","url":null,"abstract":"We describe regularization tools for training large-scale artificial feed-forward neural networks. We propose algorithms that explicitly use a sequence of Tikhonov regularized nonlinear least squar ...","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"1 1","pages":"49-69"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85573825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multiplier adjustment technique for the capacitated concentrator location problem","authors":"M. Celani, R. Cerulli, M. Gaudioso, Y. Sergeyev","doi":"10.1080/10556789808805703","DOIUrl":"https://doi.org/10.1080/10556789808805703","url":null,"abstract":"We describe a new dual descent method for a pure 0— location problem known as the capacitated concentrator location problem. The multiplier adjustment technique presented is aimed to find an upper bound in a Lagrangean relaxation context permitting both to decrease and to increase multipliers in the course of the search in contrast with methods where that ones are monotonically updated.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"15 1","pages":"87-102"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73470597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semidefinite relaxation and nonconvex quadratic optimization","authors":"Y. Nesterov","doi":"10.1080/10556789808805690","DOIUrl":"https://doi.org/10.1080/10556789808805690","url":null,"abstract":"In this paper we consider the semidefinite relaxation of some global optimization problems. We prove that in some cases this relaxation provides us with a constant relative accuracy estimate for the exact solution.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"37 1","pages":"141-160"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78353392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new nonlinear ABS-type algorithm and its efficiency analysis ∗","authors":"N. Deng, Z. Chen","doi":"10.1080/10556789808805702","DOIUrl":"https://doi.org/10.1080/10556789808805702","url":null,"abstract":"As a continuation work following [4] and [5], a new ABS-type algorithm for a nonlinear system of equations is proposed. A major iteration of this algorithm requires n component evaluations and only one gradient evaluation. We prove that the algorithm is superlinearly convergent with R-order at least τ n , where τ n is the unique positive root of τn −τn−1 −1=0. It is shown that the new algorithm is usually more efficient than the methods of Newton, Brown and Brent, and the ABS-type algorithms in [1], [4] and [5], in the sense of some standard efficiency measure.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"32 1","pages":"71-85"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10556789808805702","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72526760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A globally convergent primal-dual interior point method for constrained optimization","authors":"Hiroshi Yamashita","doi":"10.1080/10556789808805723","DOIUrl":"https://doi.org/10.1080/10556789808805723","url":null,"abstract":"This paper proposes a primal-dual interior point method for solving general nonlinearly constrained optimization problems. The method is based on solving the Barrier Karush-Kuhn-Tucker conditions for optimality by the Newton method. To globalize the iteration we introduce the Barrier-penalty function and the optimality condition for minimizing this function. Our basic iteration is the Newton iteration for solving the optimality conditions with respect to the Barrier-penalty function which coincides with the Newton iteration for the Barrier Karush-Kuhn-Tucker conditions if the penalty parameter is sufficiently large. It is proved that the method is globally convergent from an arbitrary initial point that strictly satisfies the bounds on the variables. Implementations of the given algorithm are done for small dense nonlinear programs. The method solves all the problems in Hock and Schittkowski's textbook efficiently. Thus it is shown that the method given in this paper possesses a good theoretical convergen...","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"8 1","pages":"443-469"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89844984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational experience with globally convergent descent methods for large sparse systems of nonlinear equations","authors":"L. Luksan, J. Vlček","doi":"10.1080/10556789808805677","DOIUrl":"https://doi.org/10.1080/10556789808805677","url":null,"abstract":"This paper is devoted to globally convergent Armijo-type descent methods for solving large sparse systems of nonlinear equations. These methods include the discrete Newtcin method and a broad class of Newton-like methods based on various approximations of the Jacobian matrix. We propose a general theory of global convergence together with a robust algorithm including a special restarting strategy. This algorithm is based cfn the preconditioned smoothed CGS method for solving nonsymmetric systems of linejtr equations. After reviewing 12 particular Newton-like methods, we propose results of extensive computational experiments. These results demonstrate high efficiency of tip proposed algorithm","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":"19 1","pages":"201-223"},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84679386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}