Online adaptive data-driven control for unknown nonlinear systems with constrained inputs

C. Liu, Zhousheng Chu, Yalun Li
{"title":"具有约束输入的未知非线性系统的在线自适应数据驱动控制","authors":"C. Liu, Zhousheng Chu, Yalun Li","doi":"10.1109/ICCSIE55183.2023.10175236","DOIUrl":null,"url":null,"abstract":"In this paper, we discuss a novel algorithm for learning the solution to the optimal control problem (OCP) for affine nonlinear continuous-time constrained-input systems with completely unknown dynamics on a data-driven integral reinforcement learning (IRL) basis. It is well known that we have to obtain the solution of the nonlinear OCP by means of resolving the Hamilton-Jacobi-Bellman equation (HJBE). However, the HJBE is usually a nonlinear partial differential equation that cannot be solved analytically. To make matters worse, most practical systems are too complex to be accurately mathematically modelled and have real-time errors in the system’s controller. To address the above issues, we propose an online data-driven IRL algorithm that is anchored in policy iteration (PI), using real-time data from practical systems, rather than system models or partially sampled data from systems. To begin with, the PI algorithm is shown. Then, we approximate the performance function and the control policy using a critic neural network (CNN) and an actor neural network (ANN), respectively. The approach presented is an online-policy IRL, where the data are continuously sampled in the input and state domains. The weights of the CNN and ANN are renewed by least squares using the collected data, which minimizes residual errors. 
Finally, the validity of the approach in solving the OCP is demonstrated from the simulation results.","PeriodicalId":391372,"journal":{"name":"2022 First International Conference on Cyber-Energy Systems and Intelligent Energy (ICCSIE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Online adaptive data-driven control for unknown nonlinear systems with constrained-input\",\"authors\":\"C. Liu, Zhousheng Chu, Yalun Li\",\"doi\":\"10.1109/ICCSIE55183.2023.10175236\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we discuss a novel algorithm for learning the solution to the optimal control problem (OCP) for affine nonlinear continuous-time constrained-input systems with completely unknown dynamics on a data-driven integral reinforcement learning (IRL) basis. It is well known that we have to obtain the solution of the nonlinear OCP by means of resolving the Hamilton-Jacobi-Bellman equation (HJBE). However, the HJBE is usually a nonlinear partial differential equation that cannot be solved analytically. To make matters worse, most practical systems are too complex to be accurately mathematically modelled and have real-time errors in the system’s controller. To address the above issues, we propose an online data-driven IRL algorithm that is anchored in policy iteration (PI), using real-time data from practical systems, rather than system models or partially sampled data from systems. To begin with, the PI algorithm is shown. Then, we approximate the performance function and the control policy using a critic neural network (CNN) and an actor neural network (ANN), respectively. The approach presented is an online-policy IRL, where the data are continuously sampled in the input and state domains. 
The weights of the CNN and ANN are renewed by least squares using the collected data, which minimizes residual errors. Finally, the validity of the approach in solving the OCP is demonstrated from the simulation results.\",\"PeriodicalId\":391372,\"journal\":{\"name\":\"2022 First International Conference on Cyber-Energy Systems and Intelligent Energy (ICCSIE)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 First International Conference on Cyber-Energy Systems and Intelligent Energy (ICCSIE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCSIE55183.2023.10175236\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 First International Conference on Cyber-Energy Systems and Intelligent Energy (ICCSIE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCSIE55183.2023.10175236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we present a novel algorithm that learns the solution to the optimal control problem (OCP) for affine nonlinear continuous-time systems with constrained inputs and completely unknown dynamics, built on data-driven integral reinforcement learning (IRL). The solution of the nonlinear OCP must be obtained by solving the Hamilton-Jacobi-Bellman equation (HJBE); however, the HJBE is generally a nonlinear partial differential equation that cannot be solved analytically. To make matters worse, most practical systems are too complex to model accurately, and their controllers are subject to real-time errors. To address these issues, we propose an online data-driven IRL algorithm based on policy iteration (PI) that uses real-time data from the physical system, rather than a system model or partially sampled data. First, the PI algorithm is presented. Then, the performance function and the control policy are approximated by a critic neural network (CNN) and an actor neural network (ANN), respectively. The resulting method is an on-policy IRL scheme in which data are sampled continuously over the input and state domains. The CNN and ANN weights are updated by least squares on the collected data, minimizing the residual errors. Finally, simulation results demonstrate the validity of the approach for solving the OCP.
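The PI-based IRL loop described in the abstract (policy evaluation by least squares on integral-reinforcement data, then policy improvement) can be sketched on a toy scalar linear plant whose optimal value function is known in closed form from the Riccati equation. This is a minimal illustration, not the paper's method or simulation: the plant parameters `a`, `b`, the quadratic cost, and the single critic feature x² are all assumptions chosen so the learned critic weight can be checked against the analytic value, and no input constraint is modeled.

```python
import numpy as np

# Model used only to GENERATE data; the learner never uses a, b directly.
a, b = -1.0, 1.0          # hypothetical scalar plant: dx/dt = a*x + b*u
dt, T = 1e-3, 0.05        # Euler step and integral-reinforcement interval

def collect(k, x0=2.0, n_int=40):
    """Simulate the policy u = -k*x and return per-interval data:
    phi(x_t) - phi(x_{t+T}) and the integral of r = x^2 + u^2."""
    x = x0
    rows, rhs = [], []
    for _ in range(n_int):
        x_start, cost = x, 0.0
        for _ in range(int(T / dt)):
            u = -k * x
            cost += (x**2 + u**2) * dt      # running cost along the trajectory
            x += (a * x + b * u) * dt       # forward-Euler plant step
        rows.append(x_start**2 - x**2)      # critic feature phi(x) = x^2
        rhs.append(cost)
    return np.array(rows), np.array(rhs)

k = 0.0                    # initial admissible policy (plant is open-loop stable)
for _ in range(6):         # policy iteration
    A, y = collect(k)      # policy evaluation: integral Bellman equation in w
    w = np.linalg.lstsq(A[:, None], y, rcond=None)[0][0]  # critic V = w*x^2
    k = b * w              # improvement: u = -(1/2)*b*V'(x) = -b*w*x

p = np.sqrt(2) - 1         # analytic Riccati solution for this plant
print(w, p)                # w approaches p as PI converges
```

Each iteration re-samples data under the improved policy, mirroring the on-policy character of the method; the least-squares step plays the role of the CNN weight update, here with a single basis function so the result is directly checkable.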