Reactive optimal motion planning for a class of holonomic planar agents using reinforcement learning with provable guarantees

Panagiotis Rousseas, C. Bechlioulis, K. Kyriakopoulos
{"title":"Reactive optimal motion planning for a class of holonomic planar agents using reinforcement learning with provable guarantees","authors":"Panagiotis Rousseas, C. Bechlioulis, K. Kyriakopoulos","doi":"10.3389/frobt.2023.1255696","DOIUrl":null,"url":null,"abstract":"In control theory, reactive methods have been widely celebrated owing to their success in providing robust, provably convergent solutions to control problems. Even though such methods have long been formulated for motion planning, optimality has largely been left untreated through reactive means, with the community focusing on discrete/graph-based solutions. Although the latter exhibit certain advantages (completeness, complicated state-spaces), the recent rise in Reinforcement Learning (RL), provides novel ways to address the limitations of reactive methods. The goal of this paper is to treat the reactive optimal motion planning problem through an RL framework. A policy iteration RL scheme is formulated in a consistent manner with the control-theoretic results, thus utilizing the advantages of each approach in a complementary way; RL is employed to construct the optimal input without necessitating the solution of a hard, non-linear partial differential equation. Conversely, safety, convergence and policy improvement are guaranteed through control theoretic arguments. The proposed method is validated in simulated synthetic workspaces, and compared against reactive methods as well as a PRM and an RRT⋆ approach. The proposed method outperforms or closely matches the latter methods, indicating the near global optimality of the former, while providing a solution for planning from anywhere within the workspace to the goal position.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"122 20","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Robotics and AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frobt.2023.1255696","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In control theory, reactive methods have been widely celebrated owing to their success in providing robust, provably convergent solutions to control problems. Even though such methods have long been formulated for motion planning, optimality has largely been left unaddressed by reactive means, with the community focusing on discrete/graph-based solutions. Although the latter exhibit certain advantages (completeness, handling of complicated state-spaces), the recent rise of Reinforcement Learning (RL) provides novel ways to address the limitations of reactive methods. The goal of this paper is to treat the reactive optimal motion planning problem through an RL framework. A policy iteration RL scheme is formulated in a manner consistent with the control-theoretic results, thus utilizing the advantages of each approach in a complementary way; RL is employed to construct the optimal input without necessitating the solution of a hard, non-linear partial differential equation. In turn, safety, convergence, and policy improvement are guaranteed through control-theoretic arguments. The proposed method is validated in simulated synthetic workspaces and compared against reactive methods as well as a PRM and an RRT⋆ approach. The proposed method outperforms or closely matches the latter methods, indicating the near-global optimality of the former, while providing a solution for planning from anywhere within the workspace to the goal position.
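For readers unfamiliar with the setting, the following is a minimal sketch of the standard continuous-time policy-iteration formulation that this class of methods builds on; the control-affine dynamics, quadratic input cost, and notation below are generic assumptions, not necessarily the exact scheme of the paper. For a system $\dot{x} = f(x) + g(x)\,u$ with infinite-horizon cost $J(x_0, u) = \int_0^\infty \big( Q(x) + u^\top R\, u \big)\, \mathrm{d}t$, the optimal value function $V^*$ solves the non-linear Hamilton-Jacobi-Bellman (HJB) partial differential equation

$0 = \min_u \big[\, Q(x) + u^\top R\, u + \nabla V^*(x)^\top \big( f(x) + g(x)\,u \big) \,\big], \qquad u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V^*(x).$

Policy iteration sidesteps solving this PDE directly by alternating two steps from an admissible initial policy $u_0$:

$\text{(policy evaluation)} \quad 0 = Q(x) + u_i(x)^\top R\, u_i(x) + \nabla V_i(x)^\top \big( f(x) + g(x)\,u_i(x) \big),$

$\text{(policy improvement)} \quad u_{i+1}(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V_i(x).$

The evaluation equation is linear in $V_i$, hence far more tractable than the HJB itself, and under standard admissibility assumptions the value functions are non-increasing ($V_{i+1} \le V_i$ pointwise) and converge to $V^*$. This is consistent with the abstract's statement that RL constructs the optimal input without solving the hard non-linear PDE, while safety, convergence, and policy improvement are certified through control-theoretic arguments.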