A knowledge-guided reinforcement learning method for lateral path tracking

IF 7.5 | CAS Tier 2 (Computer Science) | JCR Q1 | AUTOMATION & CONTROL SYSTEMS
Bo Hu, Sunan Zhang, Yuxiang Feng, Bingbing Li, Hao Sun, Mingyang Chen, Weichao Zhuang, Yi Zhang
{"title":"Engineering applications of artificial intelligence a knowledge-guided reinforcement learning method for lateral path tracking","authors":"Bo Hu ,&nbsp;Sunan Zhang ,&nbsp;Yuxiang Feng ,&nbsp;Bingbing Li ,&nbsp;Hao Sun ,&nbsp;Mingyang Chen ,&nbsp;Weichao Zhuang ,&nbsp;Yi Zhang","doi":"10.1016/j.engappai.2024.109588","DOIUrl":null,"url":null,"abstract":"<div><div>Lateral Control algorithms in autonomous vehicles often necessitates an online fine-tuning procedure in the real world. While reinforcement learning (RL) enables vehicles to learn and improve the lateral control performance through repeated trial and error interactions with a dynamic environment, applying RL directly to safety-critical applications in real physical world is challenging because ensuring safety during the learning process remains difficult. To enable safe learning, a promising direction is to make use of previously gathered offline data, which is frequently accessible in engineering applications. In this context, this paper presents a set of knowledge-guided RL algorithms that can not only fully leverage the prior collected offline data without the need of a physics-based simulator, but also allow further online policy improvement in a smooth, safe and efficient manner. To evaluate the effectiveness of the proposed algorithms on a real controller, a hardware-in-the-loop and a miniature vehicle platform are built. 
Compared with the vanilla RL, behavior cloning and the existing controller, the proposed algorithms realize a closed-loop solution for lateral control problems from offline training to online fine-tuning, making it attractive for future similar RL-based controller to build upon.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624017469","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Lateral control algorithms in autonomous vehicles often necessitate an online fine-tuning procedure in the real world. While reinforcement learning (RL) enables vehicles to learn and improve lateral control performance through repeated trial-and-error interactions with a dynamic environment, applying RL directly to safety-critical applications in the physical world is challenging because ensuring safety during the learning process remains difficult. To enable safe learning, a promising direction is to make use of previously gathered offline data, which is frequently accessible in engineering applications. In this context, this paper presents a set of knowledge-guided RL algorithms that not only fully leverage previously collected offline data without the need for a physics-based simulator, but also allow further online policy improvement in a smooth, safe, and efficient manner. To evaluate the effectiveness of the proposed algorithms on a real controller, a hardware-in-the-loop setup and a miniature vehicle platform are built. Compared with vanilla RL, behavior cloning, and the existing controller, the proposed algorithms realize a closed-loop solution for lateral control problems, from offline training to online fine-tuning, making them attractive for future RL-based controllers to build upon.
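The offline-to-online pipeline the abstract describes can be illustrated with a minimal, generic sketch: first clone an existing controller's logged behavior from offline data (no simulator needed), then cautiously refine the policy online with small updates. All names, data, and the linear policy below are hypothetical illustrations, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline log: state = [lateral error, heading error],
# action = steering angle produced by an existing controller.
expert_gain = np.array([-0.8, -1.5])          # prior knowledge hidden in the log
states = rng.normal(size=(200, 2))
actions = states @ expert_gain + 0.01 * rng.normal(size=200)

# Offline phase: behavior cloning of a linear steering policy
# via least squares on the logged (state, action) pairs.
policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Online phase: cautious fine-tuning with a small learning rate,
# nudging the cloned policy with fresh samples so the transition
# from offline training to online improvement stays smooth.
lr = 0.05
for _ in range(100):
    s = rng.normal(size=2)
    a = policy @ s
    grad = (a - expert_gain @ s) * s          # squared-error gradient
    policy -= lr * grad

print(np.round(policy, 2))                    # close to the expert gain
```

The two-phase split mirrors the closed-loop idea in the abstract: the offline fit provides a safe starting policy, and the small online step size keeps subsequent exploration conservative.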
Source journal
Engineering Applications of Artificial Intelligence
Category: Engineering Technology – Engineering: Electrical & Electronic
CiteScore: 9.60
Self-citation rate: 10.00%
Articles per year: 505
Review time: 68 days
Journal introduction: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes.