{"title":"基于自适应因子评价的无漂移非线性系统最优调节","authors":"Ashwin P. Dani;Shubhendu Bhasin","doi":"10.1109/OJCSYS.2025.3552999","DOIUrl":null,"url":null,"abstract":"In this paper, a continuous-time adaptive actor-critic reinforcement learning (RL) controller is developed for drift-free uncertain nonlinear systems. Practical examples of such systems are image-based visual servoing (IBVS) and wheeled mobile robots (WMR), where the system dynamics include a parametric uncertainty in the control effectiveness matrix with no drift term. The uncertainty in the input term poses a challenge when developing a continuous-time RL controller using existing methods. This paper presents an actor-critic/synchronous policy iteration (PI)-based RL controller with a newly derived constrained concurrent learning (CCL)-based parameter update law for estimating the unknown parameters of the linearly parametrized control effectiveness matrix. The parameter update law ensures that the parameters do not converge to <inline-formula><tex-math>$zero$</tex-math></inline-formula>, avoiding possible loss of stabilization. An infinite-horizon value function minimization objective is achieved by regulating the current states to the desired with near-optimal control efforts. The proposed controller guarantees closed-loop stability, and simulation results in the presence of noise validate the proposed theory using IBVS and WMR examples.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"117-129"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10932715","citationCount":"0","resultStr":"{\"title\":\"Adaptive Actor-Critic Based Optimal Regulation for Drift-Free Nonlinear Systems\",\"authors\":\"Ashwin P. Dani;Shubhendu Bhasin\",\"doi\":\"10.1109/OJCSYS.2025.3552999\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a continuous-time adaptive actor-critic reinforcement learning (RL) controller is developed for drift-free uncertain nonlinear systems. Practical examples of such systems are image-based visual servoing (IBVS) and wheeled mobile robots (WMR), where the system dynamics include a parametric uncertainty in the control effectiveness matrix with no drift term. The uncertainty in the input term poses a challenge when developing a continuous-time RL controller using existing methods. This paper presents an actor-critic/synchronous policy iteration (PI)-based RL controller with a newly derived constrained concurrent learning (CCL)-based parameter update law for estimating the unknown parameters of the linearly parametrized control effectiveness matrix. The parameter update law ensures that the parameters do not converge to <inline-formula><tex-math>$zero$</tex-math></inline-formula>, avoiding possible loss of stabilization. An infinite-horizon value function minimization objective is achieved by regulating the current states to the desired with near-optimal control efforts. 
The proposed controller guarantees closed-loop stability, and simulation results in the presence of noise validate the proposed theory using IBVS and WMR examples.\",\"PeriodicalId\":73299,\"journal\":{\"name\":\"IEEE open journal of control systems\",\"volume\":\"4 \",\"pages\":\"117-129\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10932715\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE open journal of control systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10932715/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of control systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10932715/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adaptive Actor-Critic Based Optimal Regulation for Drift-Free Nonlinear Systems
In this paper, a continuous-time adaptive actor-critic reinforcement learning (RL) controller is developed for drift-free uncertain nonlinear systems. Practical examples of such systems are image-based visual servoing (IBVS) and wheeled mobile robots (WMRs), whose dynamics contain parametric uncertainty in the control effectiveness matrix and no drift term. The uncertainty in the input term poses a challenge when developing a continuous-time RL controller using existing methods. This paper presents an actor-critic/synchronous policy iteration (PI)-based RL controller with a newly derived constrained concurrent learning (CCL)-based parameter update law for estimating the unknown parameters of the linearly parametrized control effectiveness matrix. The parameter update law ensures that the parameter estimates do not converge to zero, avoiding a possible loss of stabilization. An infinite-horizon value function minimization objective is achieved by regulating the current states to the desired states with near-optimal control effort. The proposed controller guarantees closed-loop stability, and simulation results on IBVS and WMR examples in the presence of noise validate the proposed theory.
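For reference, the drift-free system class described in the abstract is conventionally written as follows; this setup uses standard notation assumed here (the symbols G, theta, Q, R are not taken from the paper):

\dot{x} = G(x;\theta)\, u, \qquad G(x;\theta)\ \text{linear in the unknown parameter vector}\ \theta,

with the infinite-horizon value function that the actor-critic controller minimizes while regulating the state to the desired setpoint:

V(x(t)) = \int_{t}^{\infty} \big( x(\tau)^{\top} Q\, x(\tau) + u(\tau)^{\top} R\, u(\tau) \big)\, \mathrm{d}\tau, \qquad Q \succeq 0,\; R \succ 0.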
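To make the concurrent-learning idea concrete, below is a minimal Python sketch of a generic concurrent-learning parameter update with a norm floor that keeps the estimate away from zero. This is an illustrative stand-in under assumed notation (theta_hat, Y, the recorded history stack, and all gains are hypothetical names), not the paper's constrained CCL law, which is derived with formal stability guarantees.

import numpy as np

def ccl_style_update(theta_hat, Y_now, e_now, stack, gamma=1.0, k_cl=0.5,
                     theta_min=0.1, dt=1e-3):
    """One Euler step of a generic concurrent-learning parameter update.

    theta_hat : current parameter estimate, shape (n,)
    Y_now     : current regressor matrix, shape (m, n), from x_dot = Y(x, u) @ theta
    e_now     : current identification error, shape (m,)
    stack     : list of recorded (Y_j, r_j) pairs with r_j the measured x_dot_j
    """
    # Instantaneous gradient term driven by the current error.
    grad = Y_now.T @ e_now
    # Concurrent-learning term: replay recorded data so parameter convergence
    # does not rely on persistent excitation of the current trajectory.
    for Y_j, r_j in stack:
        grad += k_cl * Y_j.T @ (r_j - Y_j @ theta_hat)
    theta_next = theta_hat + dt * gamma * grad
    # Crude stand-in for the paper's constraint: keep the estimate away from
    # zero so the estimated control effectiveness matrix is not driven singular.
    norm = np.linalg.norm(theta_next)
    if norm < theta_min:
        theta_next = theta_next * (theta_min / max(norm, 1e-12))
    return theta_next

The replayed history stack is what relaxes the persistent-excitation requirement in concurrent learning; the final projection step only mimics the effect of the constraint that the paper's update law enforces by construction.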