Refining the black-box AI optimization with CMA-ES and ORM in the energy management for fuel cell electric vehicles

IF 9.9 · Zone 1 (Engineering & Technology) · Q1 ENERGY & FUELS
Jincheng Hu, Jihao Li, Ming Liu, Yanjun Huang, Quan Zhou, Yonggang Liu, Zheng Chen, Jun Yang, Jingjing Jiang, Yuanjian Zhang
DOI: 10.1016/j.enconman.2024.119399
Journal: Energy Conversion and Management
Published: 2024-12-24 (Journal Article)
Citations: 0

Abstract

Fuel cell electric vehicles (FCEVs) represent a significant advancement in zero-emission green mobility. By integrating deep reinforcement learning (DRL) for multi-objective energy management strategies, they unlock substantial potential for efficient and sustainable driving. However, the black-box nature of DRL and the challenges in designing multi-objective reward functions pose optimization difficulties. In this paper, we propose an adaptive evolutionary framework to enhance DRL-based energy management strategies (EMS) by employing the covariance matrix adaptation evolution strategy (CMA-ES) for effective black-box optimization. By implementing an opponent reference mechanism, a self-balanced reward function for multiple optimization targets, including vehicle dynamics, powertrain economy, and more, is constructed in the proposed approach. This allows the system to automatically weigh sub-optimization targets and learn superior energy management behaviour via numerous simulation trajectories. The processor-in-the-loop (PIL) test results demonstrate that the proposed solution responds to adaptive adjustment conditions without violating any safety constraints, reduces energy consumption by at least 18.4%, and greatly improves energy utilization efficiency and safety. It exhibits promising optimality in complex energy management problems and robustness to varying velocity profiles, delivering a significant performance advantage over baseline approaches.
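The core idea in the abstract — treating the reward-function weights of a DRL-based EMS as decision variables and tuning them by evolution-strategy sampling — can be sketched in a few lines. The snippet below is a toy illustration only: `ems_cost` is a hypothetical stand-in for a full DRL training/PIL evaluation rollout, and the optimizer is a simplified (μ, λ) evolution strategy rather than full CMA-ES (it omits the covariance-matrix and step-size path adaptation, keeping only the sample/rank/recombine loop the two methods share).

```python
import random

def ems_cost(weights):
    # Hypothetical stand-in for the simulation rollout: maps a candidate
    # pair of reward weights (energy economy vs. vehicle dynamics) to a
    # scalar multi-objective cost. In the paper's framework this would be
    # a full DRL training + PIL evaluation run.
    w_energy, w_dynamics = weights
    return (w_energy - 0.7) ** 2 + (w_dynamics - 0.3) ** 2

def simple_es(cost_fn, dim=2, pop=8, elite=4, sigma=0.3, iters=60, seed=0):
    """Toy (mu, lambda) evolution strategy, a simplified stand-in for CMA-ES.

    Each generation samples `pop` candidate weight vectors around the current
    mean, ranks them by cost, keeps the `elite` best, and recombines them
    into a new mean while shrinking the step size.
    """
    rng = random.Random(seed)
    mean = [0.5] * dim  # initial guess for the reward weights
    for _ in range(iters):
        cands = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(pop)]
        cands.sort(key=cost_fn)  # rank candidates by the black-box cost
        mean = [sum(c[i] for c in cands[:elite]) / elite for i in range(dim)]
        sigma *= 0.95  # crude step-size decay in place of CMA's path-based control
    return mean

best = simple_es(ems_cost)
```

Under these assumptions the loop converges toward the weights that minimize the toy cost; the practical appeal, as in the paper, is that only scalar cost evaluations of the black-box system are needed, with no gradients through the DRL policy.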
Source journal

Energy Conversion and Management (Engineering & Technology – Mechanics)
CiteScore: 19.00
Self-citation rate: 11.50%
Articles per year: 1304
Review time: 17 days
Journal description: The journal Energy Conversion and Management provides a forum for publishing original contributions and comprehensive technical review articles of interdisciplinary and original research on all important energy topics. The topics considered include energy generation, utilization, conversion, storage, transmission, conservation, management and sustainability. These topics typically involve various types of energy such as mechanical, thermal, nuclear, chemical, electromagnetic, magnetic and electric. These energy types cover all known energy resources, including renewable resources (e.g., solar, bio, hydro, wind, geothermal and ocean energy), fossil fuels and nuclear resources.