Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints

IF 15.3 · CAS Zone 1, Computer Science · JCR Q1, Automation & Control Systems
Mingming Zhao;Ding Wang;Shijie Song;Junfei Qiao
DOI: 10.1109/JAS.2024.124509
Journal: IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 12, pp. 2408-2422
Published: 2024-09-04 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10664188/
Citations: 0

Abstract

This article develops a novel data-driven safe Q-learning method to design a safe optimal controller that guarantees the constrained states of nonlinear systems always remain in the safe region while providing optimal performance. First, we design an augmented utility function, consisting of an adjustable positive definite control obstacle function and a quadratic form of the next state, to ensure safety and optimality. Second, by exploiting a pre-designed admissible policy for initialization, an off-policy stabilizing value iteration Q-learning (SVIQL) algorithm is presented to seek the safe optimal policy using offline data within the safe region rather than a mathematical model. Third, the monotonicity, safety, and optimality of the SVIQL algorithm are theoretically proven. To obtain the initial admissible policy for SVIQL, an offline VIQL algorithm with zero initialization is constructed and a new admissibility criterion is established for immature iterative policies. Moreover, critic and action networks with precise approximation ability are established to support the operation of the VIQL and SVIQL algorithms. Finally, three simulation experiments are conducted to demonstrate the effectiveness and superiority of the developed safe Q-learning method.
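The abstract's core ingredients — a quadratic utility augmented with a positive-definite barrier-like ("control obstacle") term plus a quadratic form of the next state, zero-initialized value iteration, and a greedy policy extracted from the converged Q-function — can be illustrated with a toy tabular sketch. This is not the paper's SVIQL algorithm (which is off-policy, uses offline data, and employs critic/action networks); the dynamics, grids, barrier form, and gains below are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (assumed, not the paper's algorithm): tabular value
# iteration Q-learning on a 1-D linear system x' = a*x + b*u with an
# asymmetric state constraint x_lo < x < x_hi.
a, b = 0.9, 0.5                    # toy dynamics (assumed)
x_lo, x_hi = -0.5, 1.0             # asymmetric safe region (assumed)
xs = np.linspace(x_lo + 0.05, x_hi - 0.05, 41)   # state grid
us = np.linspace(-1.0, 1.0, 21)                  # action grid
mu = 0.1                           # adjustable barrier weight

def barrier(x):
    # Positive-definite obstacle term: zero at the midpoint of the safe
    # region, growing without bound as x approaches either boundary.
    return mu * (np.log((x_hi - x_lo) ** 2 / 4.0)
                 - np.log((x_hi - x) * (x - x_lo)))

def utility(x, u, x_next):
    # Quadratic cost, plus a quadratic form of the next state, plus the
    # barrier term -- mimicking the augmented utility described above.
    return x**2 + u**2 + x_next**2 + barrier(x)

Qtab = np.zeros((len(xs), len(us)))   # zero initialization, as in VIQL
for _ in range(200):                  # value iteration sweeps
    Qnew = np.empty_like(Qtab)
    for i, x in enumerate(xs):
        for j, u in enumerate(us):
            xn = np.clip(a * x + b * u, xs[0], xs[-1])
            k = np.argmin(np.abs(xs - xn))   # nearest grid state
            # Q_{k+1}(x,u) = U(x,u) + min_{u'} Q_k(x',u')
            Qnew[i, j] = utility(x, u, xn) + Qtab[k].min()
    Qtab = Qnew

policy = us[Qtab.argmin(axis=1)]      # greedy policy on the grid
```

Because the utility is nonnegative and the barrier diverges at the constraint boundaries, the zero-initialized iteration is monotonically nondecreasing and the greedy policy keeps trajectories inside the safe region on this toy problem, mirroring the monotonicity and safety properties proven for SVIQL.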
Source Journal: IEEE/CAA Journal of Automatica Sinica (Engineering: Control and Systems Engineering)
CiteScore: 23.50
Self-citation rate: 11.00%
Annual publications: 880
Journal introduction: The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical/experimental research and development in the field of automation. The journal covers a wide range of topics including automatic control, artificial intelligence and intelligent control, systems theory and engineering, pattern recognition and intelligent systems, automation engineering and applications, information processing and information systems, network-based automation, robotics, sensing and measurement, and navigation, guidance, and control. Additionally, the journal is abstracted/indexed in several prominent databases including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.