Title: Disturbance estimator-based reinforcement learning robust stabilization control for a class of chaotic systems
Authors: Keyi Li, Hongsheng Sha, Rongwei Guo
Journal: Chaos, Solitons & Fractals, vol. 198, Article 116547 (Q1, Mathematics, Interdisciplinary Applications; IF 5.3)
Publication date: 2025-05-18
DOI: 10.1016/j.chaos.2025.116547
URL: https://www.sciencedirect.com/science/article/pii/S0960077925005600
Citations: 0
Abstract
In this study, a novel optimal control strategy is developed for the stabilization of a class of chaotic systems. The strategy is based on a positive gradient descent training mode and provides a critic-actor reinforcement learning (RL) algorithm, in which the critic network is used to approximate the nonlinear Hamilton–Jacobi–Bellman (HJB) equation derived from a performance evaluation index function that accounts for model uncertainties. The optimal controller is obtained from the actor network, which incorporates a disturbance estimator (DE): an observer composed of specially designed filters that accurately suppresses specified external disturbances. The optimization process does not require persistent excitation (PE) of the input signals. A Lyapunov analysis is then provided to give a comprehensive assessment of system stability and optimal control performance. Finally, the efficacy of the proposed control approach is confirmed through simulation experiments.
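The abstract does not give the estimator's filter equations. As a hedged illustration of the general disturbance-observer idea it builds on (not the paper's DE design or its critic-actor networks), the sketch below stabilizes a hypothetical scalar nonlinear plant x' = f(x) + u + d using a classical first-order disturbance observer combined with feedback linearization; the dynamics f, the gains L and k, and the disturbance d are all assumptions chosen for illustration:

```python
import numpy as np

def simulate(T=20.0, dt=1e-3, L=20.0, k=5.0):
    # Hypothetical plant: x' = f(x) + u + d, standing in for the chaotic system.
    f = lambda x: x - x**3
    d_true = lambda t: 0.5 * np.sin(2.0 * t)   # disturbance, unknown to the controller
    x, z = 1.5, 0.0                            # plant state and observer internal state
    d_hat = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        fx = f(x)
        d_hat = z + L * x                      # disturbance estimate from the filter
        u = -fx - k * x - d_hat                # stabilizing feedback + compensation
        dx = fx + u + d_true(t)                # closed-loop plant derivative
        dz = -L * (fx + u + d_hat)             # observer (filter) dynamics
        x += dt * dx                           # explicit Euler integration
        z += dt * dz
    return x, d_hat, d_true(T)

x_final, d_hat, d_final = simulate()
```

With d_hat = z + L*x, the estimation error e = d - d_hat obeys e' = d' - L*e, so a larger filter gain L tracks the disturbance faster; the compensated closed loop reduces to x' = -k*x + e, driving the state into a small neighborhood of the origin.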
Journal introduction:
Chaos, Solitons & Fractals strives to establish itself as a premier journal in the interdisciplinary realm of Nonlinear Science, Non-equilibrium, and Complex Phenomena. It welcomes submissions covering a broad spectrum of topics within this field, including dynamics, non-equilibrium processes in physics, chemistry, and geophysics, complex matter and networks, mathematical models, computational biology, applications to quantum and mesoscopic phenomena, fluctuations and random processes, self-organization, and social phenomena.