A New Self-Learning Optimal Control Scheme for Discrete-Time Nonlinear Systems Using Policy Iterative Adaptive Dynamic Programming

Qinglai Wei, Derong Liu
{"title":"一种新的基于策略迭代自适应动态规划的离散非线性系统自学习最优控制方案","authors":"Qinglai Wei, Derong Liu","doi":"10.3182/20130902-3-CN-3020.00120","DOIUrl":null,"url":null,"abstract":"Abstract In this paper, a new self-learning method using policy iterative adaptive dynamic programming (ADP) is developed to obtain the optimal control scheme of discrete-time nonlinear systems. The iterative ADP algorithm permits an arbitrary admissible control law to initialize the iterative algorithm. It is the first time that the properties of the policy iterative ADP are established for the discrete-time situation. It proves that the iterative performance index function is non-increasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. It also proves that any of the iterative control policy can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control policy, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, a simulation example is given to illustrate the performance of the present method.","PeriodicalId":90521,"journal":{"name":"IEEE International Conference on Systems Biology : [proceedings]. IEEE International Conference on Systems Biology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A New Self-Learning Optimal Control Scheme for Discrete-Time Nonlinear Systems Using Policy Iterative Adaptive Dynamic Programming\",\"authors\":\"Qinglai Wei, Derong Liu\",\"doi\":\"10.3182/20130902-3-CN-3020.00120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract In this paper, a new self-learning method using policy iterative adaptive dynamic programming (ADP) is developed to obtain the optimal control scheme of discrete-time nonlinear systems. The iterative ADP algorithm permits an arbitrary admissible control law to initialize the iterative algorithm. It is the first time that the properties of the policy iterative ADP are established for the discrete-time situation. It proves that the iterative performance index function is non-increasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. It also proves that any of the iterative control policy can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control policy, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, a simulation example is given to illustrate the performance of the present method.\",\"PeriodicalId\":90521,\"journal\":{\"name\":\"IEEE International Conference on Systems Biology : [proceedings]. IEEE International Conference on Systems Biology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE International Conference on Systems Biology : [proceedings]. 
IEEE International Conference on Systems Biology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3182/20130902-3-CN-3020.00120\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE International Conference on Systems Biology : [proceedings]. IEEE International Conference on Systems Biology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3182/20130902-3-CN-3020.00120","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, a new self-learning method using policy-iteration adaptive dynamic programming (ADP) is developed to obtain the optimal control scheme for discrete-time nonlinear systems. The iterative ADP algorithm can be initialized with an arbitrary admissible control law. To the best of the authors' knowledge, this is the first time the properties of policy-iteration ADP have been established for the discrete-time case. It is proven that the iterative performance index function converges non-increasingly to the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation, and that every iterative control policy stabilizes the nonlinear system. To facilitate the implementation of the iterative ADP algorithm, neural networks are used to approximate the performance index function and to compute the control policy, respectively. Finally, a simulation example is given to illustrate the performance of the present method.
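The abstract describes the standard policy-iteration structure: start from an admissible control law, evaluate its performance index function, then improve the policy by minimizing the right-hand side of the Bellman equation. The sketch below is an illustrative, minimal example of that structure, not the authors' implementation: it uses a hypothetical scalar system x_{k+1} = f(x_k) + g(x_k)u_k with a quadratic utility, a coarse state/control grid with linear interpolation in place of the paper's neural-network approximators, and an assumed admissible initial law u = -0.5x.

```python
# Minimal policy-iteration ADP sketch (illustrative only; the example system,
# utility, grids, and initial control law are assumptions, not from the paper).
import numpy as np

# Hypothetical discrete-time nonlinear system and stage cost.
def f(x):          return 0.8 * np.sin(x)      # drift term
def g(x):          return 1.0                  # constant input gain
def utility(x, u): return x**2 + u**2          # stage cost U(x, u)

x_grid = np.linspace(-2.0, 2.0, 201)           # state grid
u_grid = np.linspace(-2.0, 2.0, 201)           # control grid

def interp_V(V, x):
    """Linear interpolation of the tabulated performance index function."""
    return np.interp(np.clip(x, x_grid[0], x_grid[-1]), x_grid, V)

# Initial admissible (stabilizing) control law, assumed for this example.
policy = -0.5 * x_grid

for it in range(20):                           # outer policy-iteration loop
    # Policy evaluation: V(x) = U(x, mu(x)) + V(f(x) + g(x) mu(x))
    V = np.zeros_like(x_grid)
    for _ in range(500):                       # fixed-point sweeps
        x_next = f(x_grid) + g(x_grid) * policy
        V_new = utility(x_grid, policy) + interp_V(V, x_next)
        if np.max(np.abs(V_new - V)) < 1e-6:
            V = V_new
            break
        V = V_new
    # Policy improvement: mu(x) = argmin_u [ U(x, u) + V(f(x) + g(x) u) ]
    x_next_all = f(x_grid)[:, None] + g(x_grid) * u_grid[None, :]
    Q = utility(x_grid[:, None], u_grid[None, :]) + interp_V(V, x_next_all)
    new_policy = u_grid[np.argmin(Q, axis=1)]
    if np.max(np.abs(new_policy - policy)) < 1e-3:
        policy = new_policy
        break
    policy = new_policy

print("converged control law at sampled states:", policy[::50])
```

Under the paper's assumptions, the sequence of evaluated performance index functions produced by such iterations is non-increasing and each improved policy remains stabilizing; here the grid-based tabulation merely stands in for the two neural networks (critic and action networks) used in the paper to approximate the performance index function and the control policy.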