Cloud-Based Reinforcement Learning in Automotive Control Function Development

Lucas Koch, Dennis Roeser, Kevin Badalian, Alexander Lieb, Jakob Andert
{"title":"Cloud-Based Reinforcement Learning in Automotive Control Function Development","authors":"Lucas Koch, Dennis Roeser, Kevin Badalian, Alexander Lieb, Jakob Andert","doi":"10.3390/vehicles5030050","DOIUrl":null,"url":null,"abstract":"Automotive control functions are becoming increasingly complex and their development is becoming more and more elaborate, leading to a strong need for automated solutions within the development process. Here, reinforcement learning offers a significant potential for function development to generate optimized control functions in an automated manner. Despite its successful deployment in a variety of control tasks, there is still a lack of standard tooling solutions for function development based on reinforcement learning in the automotive industry. To address this gap, we present a flexible framework that couples the conventional development process with an open-source reinforcement learning library. It features modular, physical models for relevant vehicle components, a co-simulation with a microscopic traffic simulation to generate realistic scenarios, and enables distributed and parallelized training. We demonstrate the effectiveness of our proposed method in a feasibility study to learn a control function for automated longitudinal control of an electric vehicle in an urban traffic scenario. The evolved control strategy produces a smooth trajectory with energy savings of up to 14%. The results highlight the great potential of reinforcement learning for automated control function development and prove the effectiveness of the proposed framework.","PeriodicalId":73282,"journal":{"name":"IEEE Intelligent Vehicles Symposium. IEEE Intelligent Vehicles Symposium","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Intelligent Vehicles Symposium. IEEE Intelligent Vehicles Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/vehicles5030050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Automotive control functions are becoming increasingly complex and their development increasingly elaborate, leading to a strong need for automated solutions within the development process. Here, reinforcement learning offers significant potential for function development by generating optimized control functions in an automated manner. Despite its successful deployment in a variety of control tasks, there is still a lack of standard tooling solutions for reinforcement-learning-based function development in the automotive industry. To address this gap, we present a flexible framework that couples the conventional development process with an open-source reinforcement learning library. It features modular, physical models for relevant vehicle components, co-simulation with a microscopic traffic simulation to generate realistic scenarios, and support for distributed and parallelized training. We demonstrate the effectiveness of the proposed method in a feasibility study that learns a control function for automated longitudinal control of an electric vehicle in an urban traffic scenario. The learned control strategy produces a smooth trajectory with energy savings of up to 14%. The results highlight the great potential of reinforcement learning for automated control function development and prove the effectiveness of the proposed framework.
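The abstract does not name the specific reinforcement learning library, vehicle component models, or traffic simulator. Purely as an illustrative sketch of the kind of setup described, the snippet below assumes a Gymnasium-style environment with a heavily simplified point-mass vehicle model standing in for the modular component models and the traffic co-simulation, and Stable-Baselines3 with subprocess workers standing in for distributed, parallelized training; all parameters, the lead-vehicle speed profile, and the reward weights are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: a simplified stand-in for the framework described in the
# abstract. Vehicle parameters, reward weights, and the lead-vehicle profile are
# hypothetical placeholders.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class EvLongitudinalEnv(gym.Env):
    """Point-mass electric vehicle following a lead vehicle in urban traffic."""

    def __init__(self, dt: float = 0.1, episode_len: int = 600):
        super().__init__()
        self.dt, self.episode_len = dt, episode_len
        # Observation: [ego speed, gap to lead vehicle, lead vehicle speed]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([30.0, 200.0, 30.0], dtype=np.float32),
        )
        # Action: normalized traction/braking demand in [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.v_ego, self.gap, self.v_lead = 0.0, 30.0, 8.0
        return self._obs(), {}

    def step(self, action):
        accel = 3.0 * float(np.clip(action[0], -1.0, 1.0))  # limit |accel| to 3 m/s^2
        self.v_ego = float(np.clip(self.v_ego + accel * self.dt, 0.0, 30.0))
        # Placeholder lead-vehicle profile; a microscopic traffic co-simulation
        # would supply realistic urban speed trajectories instead.
        self.v_lead = 8.0 + 4.0 * np.sin(0.02 * self.t)
        self.gap += (self.v_lead - self.v_ego) * self.dt
        self.t += 1

        # Reward: follow the traffic flow while penalizing an energy-use proxy.
        energy_proxy = abs(accel * self.v_ego)
        reward = -abs(self.v_ego - self.v_lead) - 0.01 * energy_proxy
        terminated = self.gap <= 0.0  # collision ends the episode
        if terminated:
            reward -= 100.0
        truncated = self.t >= self.episode_len
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.array(
            [self.v_ego, np.clip(self.gap, 0.0, 200.0), self.v_lead], dtype=np.float32
        )


if __name__ == "__main__":
    # Parallel training across worker processes, loosely mirroring the
    # distributed, parallelized training mentioned in the abstract.
    from stable_baselines3 import PPO
    from stable_baselines3.common.vec_env import SubprocVecEnv

    vec_env = SubprocVecEnv([EvLongitudinalEnv for _ in range(8)])
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=200_000)
    model.save("ev_longitudinal_policy")
```

The reward trades off tracking the surrounding traffic against an energy proxy, echoing the energy-saving objective reported in the abstract; in the actual framework, the placeholder dynamics and lead-vehicle profile would be replaced by the modular physical vehicle models and the microscopic traffic co-simulation.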