Multi-objective Longitudinal Decision-making for Autonomous Electric Vehicle: An Entropy-constrained Reinforcement Learning Approach

Xiangkun He, Cong Fei, Yulong Liu, Kaiming Yang, Xuewu Ji
{"title":"Multi-objective Longitudinal Decision-making for Autonomous Electric Vehicle: A Entropy-constrained Reinforcement Learning Approach","authors":"Xiangkun He, Cong Fei, Yulong Liu, Kaiming Yang, Xuewu Ji","doi":"10.1109/ITSC45102.2020.9294736","DOIUrl":null,"url":null,"abstract":"The challenging task of “autonomous electric vehicle” opens up a new frontier to improving traffic, saving energy and reducing emission. However, many driving decision-making problems are characterized by multiple competing objectives whose relative importance is dynamic, and that makes developing high-performance decision-making system difficult. Therefore, this paper proposes a novel entropy-constrained reinforcement learning (RL) scheme for multi-objective longitudinal decision-making of autonomous electric vehicle. Firstly, in order to prevent the policy from prematurely converging to a local optimum, the policy’s entropy is embedded in proximal policy optimization (PPO) algorithm based on actor-critic architecture. Secondly, a self-adjusting mechanism to the weight of entropy is developed to accelerate model training and improve algorithm stability through entropy constraint. Thirdly, multimodal reward signals are designed to guide the RL agent learning complex multi-modal driving policies by considering safety, comfort, economy and transport efficiency. Finally, simulation results show that, the proposed longitudinal decision-making approach for autonomous electric vehicle is feasible and effective.","PeriodicalId":394538,"journal":{"name":"2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITSC45102.2020.9294736","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

The challenging task of developing autonomous electric vehicles opens up a new frontier for improving traffic, saving energy, and reducing emissions. However, many driving decision-making problems are characterized by multiple competing objectives whose relative importance is dynamic, which makes developing a high-performance decision-making system difficult. Therefore, this paper proposes a novel entropy-constrained reinforcement learning (RL) scheme for the multi-objective longitudinal decision-making of autonomous electric vehicles. Firstly, to prevent the policy from prematurely converging to a local optimum, the policy's entropy is embedded in a proximal policy optimization (PPO) algorithm based on an actor-critic architecture. Secondly, a self-adjusting mechanism for the entropy weight is developed to accelerate model training and improve algorithm stability through an entropy constraint. Thirdly, multimodal reward signals are designed to guide the RL agent in learning complex multi-modal driving policies that account for safety, comfort, economy, and transport efficiency. Finally, simulation results show that the proposed longitudinal decision-making approach for autonomous electric vehicles is feasible and effective.
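
The paper does not provide source code. As an illustration only, the sketch below shows one way an entropy-augmented PPO objective with a self-adjusting entropy weight could be implemented. It assumes PyTorch, all names (ppo_entropy_loss, log_alpha, target_entropy) are hypothetical, and the weight update is modeled on SAC-style temperature tuning rather than taken from the paper.

    import torch

    def ppo_entropy_loss(dist, actions, old_log_probs, advantages,
                         log_alpha, target_entropy, clip_eps=0.2):
        # Clipped PPO surrogate loss (the actor side of the actor-critic setup).
        log_probs = dist.log_prob(actions)
        ratio = torch.exp(log_probs - old_log_probs)
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()

        # Entropy bonus: discourages premature collapse to a near-deterministic
        # (locally optimal) policy, as motivated in the abstract.
        entropy = dist.entropy().mean()
        alpha = log_alpha.exp().detach()
        actor_loss = policy_loss - alpha * entropy

        # Self-adjusting entropy weight (assumption: SAC-like temperature rule):
        # alpha grows when entropy falls below target_entropy, shrinks otherwise.
        alpha_loss = -(log_alpha * (target_entropy - entropy).detach())
        return actor_loss, alpha_loss

In training, actor_loss would update the policy network while alpha_loss updates only log_alpha, so the entropy weight tracks the constraint automatically instead of being hand-tuned.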
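
Likewise, the multimodal reward described in the abstract could take a weighted-sum form such as the sketch below; the individual terms, thresholds, and weights are purely illustrative assumptions, not the paper's actual reward design.

    def longitudinal_reward(gap_m, ttc_s, accel, jerk, power_kw,
                            speed, target_speed,
                            w_safe=1.0, w_comf=0.2, w_econ=0.1, w_eff=0.5):
        # Safety: penalize small gaps or low time-to-collision (illustrative thresholds).
        r_safe = -1.0 if (gap_m < 2.0 or ttc_s < 1.5) else 0.0
        # Comfort: penalize harsh acceleration and jerk.
        r_comf = -(abs(accel) + abs(jerk))
        # Economy: penalize instantaneous electric power draw.
        r_econ = -power_kw
        # Transport efficiency: penalize deviation from the desired speed.
        r_eff = -abs(speed - target_speed) / max(target_speed, 1e-6)
        return w_safe * r_safe + w_comf * r_comf + w_econ * r_econ + w_eff * r_eff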