A New Self-Learning Optimal Control Scheme for Discrete-Time Nonlinear Systems Using Policy Iterative Adaptive Dynamic Programming

Authors: Qinglai Wei, Derong Liu
DOI: 10.3182/20130902-3-CN-3020.00120
Published: 2013 (Journal Article)
Listed venue: IEEE International Conference on Systems Biology: [proceedings]
Abstract
In this paper, a new self-learning method using policy iteration adaptive dynamic programming (ADP) is developed to obtain the optimal control scheme for discrete-time nonlinear systems. The iterative ADP algorithm permits an arbitrary admissible control law to initialize the iteration. This is the first time that the properties of policy iteration ADP are established for the discrete-time case. It is proven that the iterative performance index function converges non-increasingly to the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. It is also proven that each iterative control policy stabilizes the nonlinear system. Neural networks are used to approximate the performance index function and to compute the control policy, respectively, facilitating the implementation of the iterative ADP algorithm. Finally, a simulation example is given to illustrate the performance of the proposed method.
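The structure described in the abstract (start from an admissible policy, evaluate its performance index, then improve the policy, repeating until the HJB solution is reached) can be illustrated with a minimal sketch. The scalar linear-quadratic system, cost weights, and initial gain below are hypothetical choices made so that both the policy-evaluation and policy-improvement steps have closed forms; the paper itself treats general nonlinear systems and uses neural networks in place of these exact solutions.

```python
# Policy iteration ADP sketch, specialized to a scalar linear-quadratic
# case where each step is exact (illustrative numbers, not from the paper).

a, b = 1.2, 1.0      # system: x_{k+1} = a*x_k + b*u_k  (open-loop unstable)
q, r = 1.0, 1.0      # stage cost: U(x, u) = q*x^2 + r*u^2

def evaluate(k):
    """Policy evaluation: solve V_i(x) = U(x, u_i(x)) + V_i(x_{k+1})
    for the policy u = -k*x. With V(x) = p*x^2 this is a scalar
    Lyapunov equation, solvable in closed form when |a - b*k| < 1."""
    return (q + r * k * k) / (1.0 - (a - b * k) ** 2)

def improve(p):
    """Policy improvement: u_{i+1}(x) = argmin_u [U(x, u) + V_i(a*x + b*u)],
    which for V(x) = p*x^2 gives the gain below."""
    return b * p * a / (r + b * b * p)

k = 0.9              # arbitrary admissible (stabilizing) initial gain
history = []         # iterative performance index values V_i (as p_i)
for _ in range(20):
    p = evaluate(k)
    history.append(p)
    k = improve(p)

# The sequence p_0 >= p_1 >= ... is non-increasing and converges to the
# optimal cost (the discrete-time algebraic Riccati solution here),
# and every intermediate gain keeps |a - b*k| < 1, mirroring the
# monotone convergence and stability claims of the abstract.
```

In the linear-quadratic case this reduces to the classical policy-iteration (Hewer-type) recursion; the paper's contribution is establishing the analogous monotone convergence and stability properties for general discrete-time nonlinear systems with approximated value functions.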