A unified deep reinforcement learning energy management strategy for multi-powertrain vehicles based on meta learning and hard sample mining

Impact Factor 5.4 · CAS Tier 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Xiaokai Chen , Zhiming Wu , Hamid Reza Karimi , Qianhui Li , Zhengyu Li
DOI: 10.1016/j.conengprac.2025.106396
Journal: Control Engineering Practice, Volume 163, Article 106396
Published: 2025-05-20 (Journal Article)

Abstract

Hybrid electric vehicles (HEVs) encompass diverse powertrain configurations and serve varied purposes. Commonly, energy management strategies (EMSs) have been developed separately for individual vehicle types and powertrain configurations under specific operating scenarios, and they often lack generalizability across vehicle models and operating scenarios. To fill this gap, we propose a unified deep reinforcement learning (DRL) EMS based on meta-learning and online hard sample mining. Through online fine-tuning, this strategy adapts to diverse vehicle types and powertrain configurations with minimal sample training. First, meta-reinforcement learning is employed to simultaneously learn EMSs for multiple vehicle types across various operating scenarios, establishing a base-learner that achieves satisfactory performance with minor adjustments when confronted with new configurations and operating scenarios. Furthermore, to mitigate the slow convergence associated with training multiple vehicle types and operating scenarios concurrently, a hard sample mining method is used to optimize the presentation of random operating scenarios during training. This entails recording poorly performing conditions during training and prioritizing simpler conditions before advancing to more challenging ones, thereby enhancing training efficiency in a principled way. Additionally, we validate the proposed EMS on a simulated vehicle emulator. Results demonstrate a 40% improvement in convergence efficiency while achieving comparable final performance metrics.
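The online hard sample mining described above can be pictured as a scenario scheduler: track per-scenario returns during training, sample easier (higher-return) scenarios early, and shift sampling weight toward poorly performing ones as training progresses. The sketch below is purely illustrative; the class name, the warmup rule, and the linear easy-to-hard weighting are assumptions for exposition, not the paper's exact algorithm.

```python
import random

class ScenarioScheduler:
    """Illustrative hard-sample-mining curriculum over operating scenarios."""

    def __init__(self, scenario_ids, warmup_episodes=100):
        # Exponential moving average of episode return per scenario.
        self.avg_return = {sid: 0.0 for sid in scenario_ids}
        self.warmup = warmup_episodes
        self.episode = 0

    def record(self, sid, episode_return, alpha=0.1):
        # Track how well the current policy handles this scenario.
        self.avg_return[sid] += alpha * (episode_return - self.avg_return[sid])
        self.episode += 1

    def next_scenario(self, progress):
        # progress in [0, 1]: prefer easy scenarios early, hard ones late.
        sids = list(self.avg_return)
        if self.episode < self.warmup:
            return random.choice(sids)  # uniform until estimates exist
        rets = [self.avg_return[s] for s in sids]
        lo, hi = min(rets), max(rets)
        span = max(hi - lo, 1e-8)
        ease = [(r - lo) / span for r in rets]  # 1.0 = easiest scenario
        # Blend: weight easy scenarios when progress is low, hard ones late.
        weights = [(1 - progress) * e + progress * (1 - e) + 0.1 for e in ease]
        return random.choices(sids, weights=weights, k=1)[0]
```

A training loop would call `record` after each episode and `next_scenario(step / total_steps)` to pick the next driving cycle to train on.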
Citations: 0


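The meta-learning component of the abstract, a base-learner that adapts to a new powertrain with only minor adjustments, can be sketched with the Reptile meta-update (a first-order relative of MAML). The toy 1-D quadratic losses below stand in for per-vehicle EMS training objectives; all names and numbers are illustrative assumptions, not the paper's actual architecture.

```python
def grad(theta, optimum):
    # Derivative of the toy per-task loss (theta - optimum) ** 2.
    return 2.0 * (theta - optimum)

def inner_adapt(theta, optimum, steps=10, lr=0.1):
    # Task-specific fine-tuning: plain gradient descent on one task.
    for _ in range(steps):
        theta -= lr * grad(theta, optimum)
    return theta

def reptile(task_optima, meta_steps=200, meta_lr=0.5):
    # Meta-training: nudge the base parameters toward each task's adapted
    # parameters, yielding an initialization that suits all tasks.
    theta = 0.0
    for step in range(meta_steps):
        opt = task_optima[step % len(task_optima)]
        adapted = inner_adapt(theta, opt)
        theta += meta_lr * (adapted - theta)
    return theta

# Three hypothetical powertrain "tasks" whose optima cluster around 2.0.
base = reptile([1.5, 2.0, 2.5])
# The base-learner sits near the task cluster, so a previously unseen
# task needs only a few inner gradient steps to adapt.
new_task = inner_adapt(base, 2.2, steps=3)
```

The design point mirrors the abstract: meta-training amortizes the cost across vehicle types, so online fine-tuning on a new configuration needs far fewer samples than training from scratch.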
Source journal
Control Engineering Practice (Engineering Technology; Engineering: Electronic & Electrical)
CiteScore: 9.20
Self-citation rate: 12.20%
Articles per year: 183
Review time: 44 days
Journal description: Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where simulation only studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica. It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material can be made available and downloaded from the website). The benefits of the presented methods must be made very clear and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper. The scope of Control Engineering Practice matches the activities of IFAC. Papers demonstrating the contribution of automation and control in improving the performance, quality, productivity, sustainability, resource and energy efficiency, and the manageability of systems and processes for the benefit of mankind and are relevant to industrial practitioners are most welcome.