Optimization with the Mixed Coordination Method

Jianxin Tang, P. Luh, T. Chang
{"title":"Optimization with the Mixed Coordination Method","authors":"Jianxin Tang, P. Luh, T. Chang","doi":"10.23919/ACC.1988.4790021","DOIUrl":null,"url":null,"abstract":"This paper studies static optimization with equality constraints by using the mixed coordination method. The idea is to relax equality constraints via Lagrange multipliers, and creat a hierarchy where the Lagrange multipliers and part of the decision variables are selected as high level variables. The method was proposed about ten years ago with a simple high level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. There are two difficulties associated with the Modified Newton's Method. One is how to obtain the Hessian matrix in determining the Newton direction, as second order derivatives of the objective function with respect to all high level variables are needed. The second is when to stop in performing a line search along the Newton direction, as the high level problem is a maxmini problem looking for a saddle point. In this paper, the Hessian matrix is obtained by using a kind of sensitivity analysis. The line search stopping criterion, on the other hand, is based on the norm of the gradient vector. Extensive numerical testing results are provided in the paper. Since the low level is a set of independent subproblems, the method is well suited for parallel processing. Furthermore, since convexification terms can be added while maintaining separability of the original problem, the method is promising for nonconvex problems.","PeriodicalId":6395,"journal":{"name":"1988 American Control Conference","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"1988-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1988 American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC.1988.4790021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

This paper studies static optimization with equality constraints using the mixed coordination method. The idea is to relax the equality constraints via Lagrange multipliers and create a hierarchy in which the Lagrange multipliers and part of the decision variables are selected as high-level variables. The method was proposed about ten years ago with a simple high-level updating scheme. In this paper we show that this simple updating scheme has a linear convergence rate under appropriate conditions. To obtain faster convergence, the Modified Newton's Method is adopted at the high level. Two difficulties are associated with the Modified Newton's Method. The first is how to obtain the Hessian matrix needed to determine the Newton direction, since second-order derivatives of the objective function with respect to all high-level variables are required. The second is when to stop a line search along the Newton direction, since the high-level problem is a max-min problem seeking a saddle point. In this paper, the Hessian matrix is obtained through a form of sensitivity analysis, and the line-search stopping criterion is based on the norm of the gradient vector. Extensive numerical testing results are provided in the paper. Since the low level is a set of independent subproblems, the method is well suited for parallel processing. Furthermore, since convexification terms can be added while maintaining the separability of the original problem, the method is promising for nonconvex problems.
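The coordination structure described in the abstract can be illustrated with a small example. The sketch below is not the paper's algorithm or test problem: it applies a simple gradient-type high-level update (ascent in the multiplier, descent in the high-level decision variable) to a hypothetical separable quadratic with a single equality constraint, with the variable partition and step sizes chosen by hand for illustration.

```python
# Minimal sketch of the mixed coordination structure on a toy problem
# (the problem, variable partition, and step sizes are illustrative
# assumptions, not taken from the paper):
#
#   minimize   f(x) = x1^2 + x2^2 + x3^2
#   subject to g(x) = x1 + x2 + x3 - 3 = 0
#
# Relaxing the constraint with a multiplier lam gives the Lagrangian
#   L(x, lam) = f(x) + lam * g(x).
# High-level variables: (lam, x1).  Low-level variables: x2, x3, whose
# subproblems min_z (z^2 + lam*z) are independent and solvable in closed form.

def low_level(lam):
    """Solve the independent low-level subproblems for fixed high-level variables."""
    # argmin_z (z^2 + lam*z) = -lam/2, one subproblem per low-level variable
    return -lam / 2.0, -lam / 2.0

def simple_high_level(lam=0.0, x1=0.0, alpha=0.3, beta=0.3, iters=500, tol=1e-10):
    """Simple gradient-type high-level scheme: ascend in the multiplier,
    descend in the high-level decision variable."""
    for _ in range(iters):
        x2, x3 = low_level(lam)
        g = x1 + x2 + x3 - 3.0       # dL/dlam: constraint violation
        dL_dx1 = 2.0 * x1 + lam      # dL/dx1 at the low-level optimum
        if max(abs(g), abs(dL_dx1)) < tol:
            break
        lam += alpha * g             # multiplier ascent
        x1 -= beta * dL_dx1          # descent in the high-level primal variable
    return lam, x1, low_level(lam)

print(simple_high_level())   # converges to lam = -2, x1 = x2 = x3 = 1
```

On this toy problem the high-level iterates approach the saddle point (lam, x1) = (-2, 1) at a linear rate, which matches the kind of convergence behavior the abstract attributes to the simple updating scheme.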
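The abstract's faster alternative replaces the simple update with a Modified Newton step, with the Hessian obtained through sensitivity analysis and a line search stopped on the gradient norm. The sketch below mimics that structure on the same toy problem, but it is only a stand-in: the Hessian is built by finite differences of the high-level gradient rather than by the paper's sensitivity analysis, and a plain Newton direction with a backtracking, gradient-norm-based acceptance test replaces the Modified Newton's Method.

```python
import numpy as np

def low_level(lam):
    # Closed-form low-level solutions for the toy problem: z_i = -lam/2.
    return -lam / 2.0, -lam / 2.0

def high_level_gradient(u):
    """Gradient of the Lagrangian in the high-level variables u = (lam, x1),
    with the low-level variables at their optimal response."""
    lam, x1 = u
    x2, x3 = low_level(lam)
    return np.array([x1 + x2 + x3 - 3.0,   # dL/dlam (constraint violation)
                     2.0 * x1 + lam])      # dL/dx1

def newton_high_level(u0, iters=20, tol=1e-10, fd_step=1e-6):
    """Newton-type high-level iteration.  The finite-difference Hessian is a
    stand-in for the paper's sensitivity-based Hessian."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        g = high_level_gradient(u)
        if np.linalg.norm(g) < tol:
            break
        # Jacobian of the gradient map = Hessian of the high-level problem.
        H = np.zeros((2, 2))
        for j in range(2):
            e = np.zeros(2); e[j] = fd_step
            H[:, j] = (high_level_gradient(u + e) - g) / fd_step
        d = np.linalg.solve(H, -g)          # Newton direction
        # Line search: accept the largest step that reduces the gradient norm,
        # since the saddle-point objective value cannot rank trial steps.
        t = 1.0
        while t > 1e-8 and np.linalg.norm(high_level_gradient(u + t * d)) >= np.linalg.norm(g):
            t *= 0.5
        u = u + t * d
    return u

lam, x1 = newton_high_level([0.0, 0.0])
print(lam, x1, low_level(lam))   # expected near lam = -2, x1 = 1, x2 = x3 = 1
```

Because the toy problem is quadratic, the Newton step lands on the saddle point in one iteration; the gradient-norm acceptance test matters in general because, at a max-min problem, a decrease in the objective alone cannot tell a good step from a bad one.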