Fixed point transformation-based adaptive optimal control using NLP

H. Khan, Á. Szeghegyi, J. Tar
{"title":"基于不动点变换的NLP自适应最优控制","authors":"H. Khan, Á. Szeghegyi, J. Tar","doi":"10.1109/SISY.2017.8080560","DOIUrl":null,"url":null,"abstract":"To reduce the effects of modeling imprécisions, in the traditional “Receding Horizon Control” that works with finite horizon lengths, in the consecutive horizon-length cycles, the actually measured state variable is used as the starting point in the next cycle. In this design, within a horizon-length cycle, a cost function is minimized under a constraint that mathematically represents the dynamic properties of the system under control. In the “Nonlinear Programming” (NLP) approach the state variables as well as the control signals are considered over a discrete time-resolution grid, and the solution is computed by the use of Lagrange's “Reduced Gradient” (RG) method. It provides the “estimated optimal control signals” and the “estimated optimal state variables” over this grid. The controller exerts the estimated control signals but the state variables develop according to the exact dynamics of the system. In this paper an alternative approach is suggested in which, instead of exerting the estimated control signals, the estimated optimized trajectory is adaptively tracked within the given horizon. Simulation investigations are presented for a simple “Linear Time-Invariant” (LTI) model with strongly non-linear cost and terminal cost functions. It is found that the transients of the adaptive controller that appear at the boundaries of the finite-length horizons reduce the available improvement in the tracking precision. In contrast to the traditional RHC, in which decreasing horizon length improves the tracking precision, in our case some increase in the horizon length improves the precision by giving the controller more time to compensate the effects of these transients.","PeriodicalId":352891,"journal":{"name":"2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Fixed point transformation-based adaptive optimal control using NLP\",\"authors\":\"H. Khan, Á. Szeghegyi, J. Tar\",\"doi\":\"10.1109/SISY.2017.8080560\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To reduce the effects of modeling imprécisions, in the traditional “Receding Horizon Control” that works with finite horizon lengths, in the consecutive horizon-length cycles, the actually measured state variable is used as the starting point in the next cycle. In this design, within a horizon-length cycle, a cost function is minimized under a constraint that mathematically represents the dynamic properties of the system under control. In the “Nonlinear Programming” (NLP) approach the state variables as well as the control signals are considered over a discrete time-resolution grid, and the solution is computed by the use of Lagrange's “Reduced Gradient” (RG) method. It provides the “estimated optimal control signals” and the “estimated optimal state variables” over this grid. The controller exerts the estimated control signals but the state variables develop according to the exact dynamics of the system. In this paper an alternative approach is suggested in which, instead of exerting the estimated control signals, the estimated optimized trajectory is adaptively tracked within the given horizon. 
Simulation investigations are presented for a simple “Linear Time-Invariant” (LTI) model with strongly non-linear cost and terminal cost functions. It is found that the transients of the adaptive controller that appear at the boundaries of the finite-length horizons reduce the available improvement in the tracking precision. In contrast to the traditional RHC, in which decreasing horizon length improves the tracking precision, in our case some increase in the horizon length improves the precision by giving the controller more time to compensate the effects of these transients.\",\"PeriodicalId\":352891,\"journal\":{\"name\":\"2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SISY.2017.8080560\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SISY.2017.8080560","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7

Abstract

To reduce the effects of modeling imprecisions, the traditional “Receding Horizon Control” (RHC), which works with finite horizon lengths, uses the actually measured state variable as the starting point of each consecutive horizon-length cycle. In this design, within a horizon-length cycle, a cost function is minimized under a constraint that mathematically represents the dynamic properties of the controlled system. In the “Nonlinear Programming” (NLP) approach, the state variables as well as the control signals are considered over a discrete time-resolution grid, and the solution is computed with Lagrange's “Reduced Gradient” (RG) method. This yields the “estimated optimal control signals” and the “estimated optimal state variables” over this grid. The controller exerts the estimated control signals, but the state variables develop according to the exact dynamics of the system. In this paper an alternative approach is suggested in which, instead of exerting the estimated control signals, the estimated optimized trajectory is adaptively tracked within the given horizon. Simulation investigations are presented for a simple “Linear Time-Invariant” (LTI) model with strongly nonlinear cost and terminal cost functions. It is found that the transients of the adaptive controller that appear at the boundaries of the finite-length horizons reduce the achievable improvement in tracking precision. In contrast to the traditional RHC, in which decreasing the horizon length improves the tracking precision, in our case a moderate increase in the horizon length improves the precision by giving the controller more time to compensate for the effects of these transients.
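
As a rough illustration of the horizon-length NLP cycle described in the abstract, the sketch below sets up a toy discrete-time LTI system over an N-point grid, minimizes a strongly nonlinear running and terminal cost under equality constraints encoding the dynamics, and returns the estimated optimal controls and states. This is a minimal sketch under stated assumptions: the matrices A and B, the horizon length, the cosh-shaped cost, and the use of SciPy's SLSQP solver are placeholders chosen for illustration, not the paper's model, cost functions, or Reduced Gradient implementation.

```python
# A minimal sketch of one receding-horizon NLP cycle for a toy LTI system,
# x_{k+1} = A x_k + B u_k, with a nonlinear running cost and terminal cost.
# All numeric values and the SLSQP solver are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discrete-time LTI dynamics
B = np.array([[0.0], [0.1]])
N = 10                                    # horizon length (grid points)
n, m = A.shape[0], B.shape[1]

def unpack(z):
    """Split the decision vector into states x_1..x_N and controls u_0..u_{N-1}."""
    x = z[:N * n].reshape(N, n)
    u = z[N * n:].reshape(N, m)
    return x, u

def cost(z, x_ref):
    """Strongly nonlinear running cost plus terminal cost (illustrative choice)."""
    x, u = unpack(z)
    err = x - x_ref
    running = np.sum(np.cosh(err) - 1.0) + 1e-2 * np.sum(u**2)
    terminal = 10.0 * np.sum(np.cosh(err[-1]) - 1.0)
    return running + terminal

def dynamics_constraint(z, x0):
    """Equality constraints encoding x_{k+1} - (A x_k + B u_k) = 0 over the grid."""
    x, u = unpack(z)
    res, x_prev = [], x0
    for k in range(N):
        res.append(x[k] - (A @ x_prev + B @ u[k]))
        x_prev = x[k]
    return np.concatenate(res)

def rhc_step(x0, x_ref):
    """Solve the horizon-length NLP and return estimated optimal states and controls."""
    z0 = np.zeros(N * (n + m))
    sol = minimize(cost, z0, args=(x_ref,), method="SLSQP",
                   constraints={"type": "eq", "fun": dynamics_constraint, "args": (x0,)})
    return unpack(sol.x)

# Example: one cycle starting from a measured state, tracking the origin.
x_meas = np.array([1.0, 0.0])
x_ref = np.zeros((N, n))
x_opt, u_opt = rhc_step(x_meas, x_ref)
print("estimated optimal controls:", u_opt.ravel())
```

In the traditional RHC scheme this cycle would be repeated: only the beginning of u_opt is applied before re-solving from the newly measured state. The adaptive alternative proposed in the paper instead treats the returned state trajectory x_opt as the reference to be adaptively tracked within the given horizon.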