Closed-loop stability analysis of deep reinforcement learning controlled systems with experimental validation

Mohammed Basheer Mohiuddin, Igor Boiko, Rana Azzam, Yahya Zweiri

IET Control Theory & Applications, vol. 18, no. 13, pp. 1649-1668, published 27 June 2024. DOI: 10.1049/cth2.12712. PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cth2.12712
Trained deep reinforcement learning (DRL)-based controllers can effectively control dynamic systems where classical controllers may be ineffective or difficult to tune. However, the lack of closed-loop stability guarantees for systems controlled by trained DRL agents hinders their adoption in practical applications. This study investigates the closed-loop stability of dynamic systems controlled by trained DRL agents using Lyapunov analysis based on a linear-quadratic polynomial approximation of the trained agent. In addition, this work characterizes the system's stability margin to determine operational boundaries and critical thresholds of the system's physical parameters for effective operation. The proposed analysis is verified on a DRL-controlled system in several simulated and experimental scenarios. The DRL agent is trained using a detailed dynamic model of a non-linear system and then tested on the corresponding real-world hardware platform without any fine-tuning. Experiments are conducted over a wide range of system states and physical parameters, and the results confirm the validity of the proposed stability analysis (video: https://youtu.be/QlpeD5sTlPU).
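The abstract describes the core idea only at a high level: a linear-quadratic polynomial approximation of the trained agent serves as the basis for a Lyapunov analysis of the closed loop. The paper's actual procedure is not reproduced here; the Python sketch below is only a rough illustration of that general idea. It fits a linear-quadratic surrogate to a placeholder policy by least squares, linearizes the closed loop at the origin using a hypothetical plant model, and checks a continuous-time Lyapunov equation. The plant matrices, the policy function, and all numerical values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): approximate a trained DRL policy with a
# linear-quadratic polynomial and check closed-loop stability via a Lyapunov equation.
# The plant matrices A, B and the function trained_policy are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical linearized plant x_dot = A x + B u (2 states, 1 input)
A = np.array([[0.0, 1.0],
              [2.0, -0.5]])          # open-loop unstable example
B = np.array([[0.0],
              [1.0]])

def trained_policy(x):
    # Placeholder for the trained DRL agent's action; in practice this would be
    # a forward pass through the trained policy network.
    return -np.array([3.5, 1.8]) @ x - 0.1 * x[0] ** 2

# 1) Fit u ~ k^T x + quadratic terms (linear-quadratic polynomial) by least squares
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))                 # sampled operating region
U = np.array([trained_policy(x) for x in X])
features = np.column_stack([X,                             # linear terms x1, x2
                            X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(features, U, rcond=None)
k_lin = coef[:2]                                           # linear part of the fit

# 2) Closed-loop linearization at the origin uses only the linear part of the fit
A_cl = A + B @ k_lin.reshape(1, 2)

# 3) Lyapunov test: solve A_cl^T P + P A_cl = -Q with Q > 0; stable iff P > 0
Q = np.eye(2)
P = solve_continuous_lyapunov(A_cl.T, -Q)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
print("Lyapunov matrix P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
```

In this sketch the quadratic terms are fitted but only the linear part enters the local stability test; how the paper treats the quadratic part of the approximation and derives stability margins is detailed in the article itself.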
About the journal:
IET Control Theory & Applications is devoted to control systems in the broadest sense, covering new theoretical results and the applications of new and established control methods. Among the topics of interest are system modelling, identification and simulation, the analysis and design of control systems (including computer-aided design), and practical implementation. The scope encompasses technological, economic, physiological (biomedical) and other systems, including man-machine interfaces.
Most of the papers published deal with original work from industrial and government laboratories and universities, but subject reviews and tutorial expositions of current methods are welcomed. Correspondence discussing published papers is also welcomed.
Applications papers need not necessarily involve new theory. Papers which describe new realisations of established methods, or control techniques applied in a novel situation, or practical studies which compare various designs, would be of interest. Of particular value are theoretical papers which discuss the applicability of new work, and applications which engender new theoretical developments.