{"title":"混合协调优化方法","authors":"Jianxin Tang, P. Luh, T. Chang","doi":"10.23919/ACC.1988.4790021","DOIUrl":null,"url":null,"abstract":"This paper studies static optimization with equality constraints by using the mixed coordination method. The idea is to relax equality constraints via Lagrange multipliers, and creat a hierarchy where the Lagrange multipliers and part of the decision variables are selected as high level variables. The method was proposed about ten years ago with a simple high level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. There are two difficulties associated with the Modified Newton's Method. One is how to obtain the Hessian matrix in determining the Newton direction, as second order derivatives of the objective function with respect to all high level variables are needed. The second is when to stop in performing a line search along the Newton direction, as the high level problem is a maxmini problem looking for a saddle point. In this paper, the Hessian matrix is obtained by using a kind of sensitivity analysis. The line search stopping criterion, on the other hand, is based on the norm of the gradient vector. Extensive numerical testing results are provided in the paper. Since the low level is a set of independent subproblems, the method is well suited for parallel processing. Furthermore, since convexification terms can be added while maintaining separability of the original problem, the method is promising for nonconvex problems.","PeriodicalId":6395,"journal":{"name":"1988 American Control Conference","volume":"52 1","pages":"1811-1816"},"PeriodicalIF":0.0000,"publicationDate":"1988-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Optimization with the Mixed Coordination Method\",\"authors\":\"Jianxin Tang, P. Luh, T. Chang\",\"doi\":\"10.23919/ACC.1988.4790021\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper studies static optimization with equality constraints by using the mixed coordination method. The idea is to relax equality constraints via Lagrange multipliers, and creat a hierarchy where the Lagrange multipliers and part of the decision variables are selected as high level variables. The method was proposed about ten years ago with a simple high level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. There are two difficulties associated with the Modified Newton's Method. One is how to obtain the Hessian matrix in determining the Newton direction, as second order derivatives of the objective function with respect to all high level variables are needed. The second is when to stop in performing a line search along the Newton direction, as the high level problem is a maxmini problem looking for a saddle point. In this paper, the Hessian matrix is obtained by using a kind of sensitivity analysis. The line search stopping criterion, on the other hand, is based on the norm of the gradient vector. Extensive numerical testing results are provided in the paper. Since the low level is a set of independent subproblems, the method is well suited for parallel processing. 
Furthermore, since convexification terms can be added while maintaining separability of the original problem, the method is promising for nonconvex problems.\",\"PeriodicalId\":6395,\"journal\":{\"name\":\"1988 American Control Conference\",\"volume\":\"52 1\",\"pages\":\"1811-1816\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1988-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"1988 American Control Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ACC.1988.4790021\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"1988 American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC.1988.4790021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper studies static optimization with equality constraints using the mixed coordination method. The idea is to relax the equality constraints via Lagrange multipliers and create a hierarchy in which the Lagrange multipliers and part of the decision variables are selected as high-level variables. The method was proposed about ten years ago with a simple high-level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. Two difficulties are associated with the Modified Newton's Method. The first is how to obtain the Hessian matrix needed to determine the Newton direction, since second-order derivatives of the objective function with respect to all high-level variables are required. The second is when to stop a line search along the Newton direction, since the high-level problem is a max-min problem seeking a saddle point. In this paper, the Hessian matrix is obtained through a form of sensitivity analysis, and the line search stopping criterion is based on the norm of the gradient vector. Extensive numerical testing results are provided. Since the low level consists of a set of independent subproblems, the method is well suited for parallel processing. Furthermore, since convexification terms can be added while maintaining the separability of the original problem, the method is promising for nonconvex problems.
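To make the hierarchy concrete, the sketch below applies the mixed coordination idea to a small separable problem: minimize x1^2 + x2^2 + y^2 subject to x1 + x2 + y = 1. The multiplier lam and the variable y play the role of high-level variables, while x1 and x2 are solved in independent low-level subproblems. The problem, step size, and variable split are illustrative assumptions, not the paper's test cases, and the high-level update shown is the simple first-order scheme (ascent in lam, descent in y) rather than the Modified Newton step.

```python
# A minimal sketch of the mixed coordination hierarchy on a toy problem
# (assumption: the problem, step size, and variable split are illustrative
# choices, not the paper's test problems).
#
#   minimize   x1^2 + x2^2 + y^2
#   subject to x1 + x2 + y = 1
#
# Lagrangian: L = x1^2 + x2^2 + y^2 + lam*(x1 + x2 + y - 1).
# High level: lam and y.  Low level: two independent subproblems in x1, x2,
# which could run in parallel on a larger instance.

import numpy as np

def low_level(lam):
    """Solve min_x x^2 + lam*x for each low-level subproblem (closed form)."""
    x1 = -lam / 2.0
    x2 = -lam / 2.0
    return x1, x2

def mixed_coordination(alpha=0.3, tol=1e-8, max_iter=1000):
    lam, y = 0.0, 0.0
    for k in range(max_iter):
        x1, x2 = low_level(lam)        # low level: independent solves
        g = x1 + x2 + y - 1.0          # constraint violation = dL/dlam
        dy = 2.0 * y + lam             # dL/dy at the low-level optimum
        # Simple high-level updating scheme: ascent in lam, descent in y.
        lam += alpha * g
        y -= alpha * dy
        # Stopping rule based on the norm of the gradient, as in the paper.
        if np.hypot(g, dy) < tol:
            break
    return lam, y, x1, x2, k

lam, y, x1, x2, k = mixed_coordination()
print(f"after {k} iterations: x1={x1:.4f}, x2={x2:.4f}, y={y:.4f}, lam={lam:.4f}")
# Expected saddle point: x1 = x2 = y = 1/3, lam = -2/3.
```

On this toy problem the iteration converges linearly to the saddle point x1 = x2 = y = 1/3, lam = -2/3, consistent with the linear rate established for the simple scheme. In the paper's refinement, the first-order high-level step would be replaced by a Modified Newton direction built from the high-level Hessian (obtained via sensitivity analysis), with the same gradient-norm criterion used to stop the line search.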