Barrier Lyapunov Function-Based Safe Reinforcement Learning Algorithm for Autonomous Vehicles with System Uncertainty

Yuxiang Zhang, Xiaoling Liang, S. Ge, B. Gao, Tong-heng Lee
{"title":"Barrier Lyapunov Function-Based Safe Reinforcement Learning Algorithm for Autonomous Vehicles with System Uncertainty","authors":"Yuxiang Zhang, Xiaoling Liang, S. Ge, B. Gao, Tong-heng Lee","doi":"10.23919/ICCAS52745.2021.9649902","DOIUrl":null,"url":null,"abstract":"Guaranteed safety and performance under various circumstances remain technically critical and practically challenging for the wide deployment of autonomous vehicles. For such safety-critical systems, it will certainly be a requirement that safe performance should be ensured even during the reinforcement learning period in the presence of system uncertainty. To address this issue, a Barrier Lyapunov Function-based safe reinforcement learning algorithm (BLF-SRL) is proposed here for the formulated nonlinear system in strict-feedback form. This approach appropriately arranges the Barrier Lyapunov Function item into the optimized backstepping control method to constrain the state-variables in the designed safety region during learning when unknown bounded system uncertainty exists. More specifically, the overall system control is optimized with the optimized backstepping technique under the framework of Actor-Critic, which optimizes the virtual control in every backstepping subsystem. Wherein, the optimal virtual control is decomposed into Barrier Lyapunov Function items; and also with an adaptive item to be learned with deep neural networks, which achieves safe exploration during the learning process. Eventually, the principle of Bellman optimality is satisfied through iteratively updating the independently approximated actor and critic to solve the Hamilton-Jacobi-Bellman equation in adaptive dynamic programming. More notably, the variance of control performance under uncertainty is also reduced with the proposed method. The effectiveness of the proposed method is verified with motion control problems for autonomous vehicles through appropriate comparison simulations.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICCAS52745.2021.9649902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Guaranteed safety and performance under varied operating conditions remain technically critical and practically challenging for the wide deployment of autonomous vehicles. For such safety-critical systems, safe performance must be ensured even during the reinforcement learning phase, in the presence of system uncertainty. To address this issue, a Barrier Lyapunov Function-based safe reinforcement learning algorithm (BLF-SRL) is proposed for nonlinear systems formulated in strict-feedback form. The approach embeds a Barrier Lyapunov Function term in the optimized backstepping control design to constrain the state variables within a designed safety region during learning, even when unknown bounded system uncertainty is present. More specifically, the overall control is optimized with the optimized backstepping technique under an actor-critic framework, in which the virtual control of every backstepping subsystem is optimized. The optimal virtual control is decomposed into Barrier Lyapunov Function terms plus an adaptive term learned with deep neural networks, which enables safe exploration during the learning process. The principle of Bellman optimality is then satisfied by iteratively updating the independently approximated actor and critic to solve the Hamilton-Jacobi-Bellman equation in adaptive dynamic programming. Notably, the proposed method also reduces the variance of control performance under uncertainty. Its effectiveness is verified on motion control problems for autonomous vehicles through comparison simulations.
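The abstract states the construction without formulas. For orientation, a minimal sketch: the log-type Barrier Lyapunov Function that is standard in this line of constraint-control work (whether the paper uses exactly this form is an assumption here), for an error signal $z$ required to stay within $|z| < k_b$, is

$$
V_b(z) = \frac{1}{2}\ln\frac{k_b^2}{k_b^2 - z^2}, \qquad |z| < k_b,
$$

which is positive definite on the open interval and grows without bound as $|z| \to k_b$; keeping $V_b$ bounded along closed-loop trajectories therefore keeps $z$ strictly inside the safety region even while the adaptive term is still being learned.

A short numerical sketch of that barrier term follows (illustrative only; the names `blf`, `blf_grad`, and the bound `k_b` are invented for this example, not taken from the paper):

```python
import numpy as np

def blf(z: float, k_b: float) -> float:
    """Log-type barrier Lyapunov value for |z| < k_b.

    Grows without bound as |z| approaches k_b, so any controller that
    keeps this value bounded keeps z strictly inside the safety region.
    """
    assert abs(z) < k_b, "state is already outside the safety region"
    return 0.5 * np.log(k_b**2 / (k_b**2 - z**2))

def blf_grad(z: float, k_b: float) -> float:
    """dV_b/dz = z / (k_b^2 - z^2), the term folded into the virtual control."""
    return z / (k_b**2 - z**2)

# Both the barrier and its gradient blow up near the boundary, which is
# what pushes the learned controller away from the constraint:
for z in (0.0, 0.5, 0.9, 0.99):
    print(f"z={z:4.2f}  V_b={blf(z, 1.0):8.3f}  dV_b/dz={blf_grad(z, 1.0):8.3f}")
```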