Development of a deep Q-learning energy management system for a hybrid electric vehicle

JCR quartile: Q1 (Engineering)
Luigi Tresca, Luca Pulvirenti, Luciano Rolando, Federico Millo
{"title":"为混合动力电动汽车开发深度 Q 学习能源管理系统","authors":"Luigi Tresca,&nbsp;Luca Pulvirenti,&nbsp;Luciano Rolando,&nbsp;Federico Millo","doi":"10.1016/j.treng.2024.100241","DOIUrl":null,"url":null,"abstract":"<div><p>In recent years, Machine Learning (ML) techniques have gained increasing popularity in several fields thanks to their ability to find hidden and complex relationships between data. Their capabilities for solving complex optimization tasks have made them extremely attractive also for the design of the Energy Management System (EMS) of electrified vehicles. Among the plethora of existing techniques, Reinforcement Learning (RL) algorithms have unprecedented potential since they can self-learn by directly interacting with the external environment through a trial-and-error procedure. In this paper, a Deep <em>Q</em>-Learning (DQL) agent, which exploits Deep Neural Networks (DNNs) to map the state-action pair to its value, was trained to reduce the CO<sub>2</sub> emissions of a state-of-the-art diesel Plug-in Hybrid Electric Vehicle (PHEV) available on the European market. The proposed methodology was tested on a virtual test rig of the investigated vehicle while operating on a charge-sustaining logic. A sensitivity analysis was performed on the reward to test the capabilities of different penalty functions to improve the fuel economy while guaranteeing the battery charge sustainability. The potential of the proposed control strategy was firstly assessed on the Worldwide harmonized Light-duty vehicles Test Cycle (WLTC) and benchmarked against a Dynamic Programming (DP) optimization to evaluate each reward. Then the best agent was tested on a wide range of type-approval and Read Driving Emission (RDE) scenarios. The results show that the best-performing agent can reach performance close to the DP reference, with a limited gap (7 %) in terms of CO<sub>2</sub> emissions.</p></div>","PeriodicalId":34480,"journal":{"name":"Transportation Engineering","volume":"16 ","pages":"Article 100241"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666691X24000162/pdfft?md5=05095b3ee43f0a811a8461a5fc684755&pid=1-s2.0-S2666691X24000162-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Development of a deep Q-learning energy management system for a hybrid electric vehicle\",\"authors\":\"Luigi Tresca,&nbsp;Luca Pulvirenti,&nbsp;Luciano Rolando,&nbsp;Federico Millo\",\"doi\":\"10.1016/j.treng.2024.100241\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In recent years, Machine Learning (ML) techniques have gained increasing popularity in several fields thanks to their ability to find hidden and complex relationships between data. Their capabilities for solving complex optimization tasks have made them extremely attractive also for the design of the Energy Management System (EMS) of electrified vehicles. Among the plethora of existing techniques, Reinforcement Learning (RL) algorithms have unprecedented potential since they can self-learn by directly interacting with the external environment through a trial-and-error procedure. In this paper, a Deep <em>Q</em>-Learning (DQL) agent, which exploits Deep Neural Networks (DNNs) to map the state-action pair to its value, was trained to reduce the CO<sub>2</sub> emissions of a state-of-the-art diesel Plug-in Hybrid Electric Vehicle (PHEV) available on the European market. 
The proposed methodology was tested on a virtual test rig of the investigated vehicle while operating on a charge-sustaining logic. A sensitivity analysis was performed on the reward to test the capabilities of different penalty functions to improve the fuel economy while guaranteeing the battery charge sustainability. The potential of the proposed control strategy was firstly assessed on the Worldwide harmonized Light-duty vehicles Test Cycle (WLTC) and benchmarked against a Dynamic Programming (DP) optimization to evaluate each reward. Then the best agent was tested on a wide range of type-approval and Read Driving Emission (RDE) scenarios. The results show that the best-performing agent can reach performance close to the DP reference, with a limited gap (7 %) in terms of CO<sub>2</sub> emissions.</p></div>\",\"PeriodicalId\":34480,\"journal\":{\"name\":\"Transportation Engineering\",\"volume\":\"16 \",\"pages\":\"Article 100241\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2666691X24000162/pdfft?md5=05095b3ee43f0a811a8461a5fc684755&pid=1-s2.0-S2666691X24000162-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transportation Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666691X24000162\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666691X24000162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, Machine Learning (ML) techniques have gained increasing popularity in several fields thanks to their ability to uncover hidden and complex relationships in data. Their capability to solve complex optimization tasks has also made them extremely attractive for the design of the Energy Management System (EMS) of electrified vehicles. Among the plethora of existing techniques, Reinforcement Learning (RL) algorithms have unprecedented potential since they can self-learn by directly interacting with the external environment through a trial-and-error procedure. In this paper, a Deep Q-Learning (DQL) agent, which exploits Deep Neural Networks (DNNs) to map each state-action pair to its value, was trained to reduce the CO2 emissions of a state-of-the-art diesel Plug-in Hybrid Electric Vehicle (PHEV) available on the European market. The proposed methodology was tested on a virtual test rig of the investigated vehicle operating under a charge-sustaining logic. A sensitivity analysis was performed on the reward to test the capability of different penalty functions to improve fuel economy while guaranteeing battery charge sustainability. The potential of the proposed control strategy was first assessed on the Worldwide harmonized Light vehicles Test Cycle (WLTC) and benchmarked against a Dynamic Programming (DP) optimization to evaluate each reward. The best agent was then tested on a wide range of type-approval and Real Driving Emission (RDE) scenarios. The results show that the best-performing agent reaches performance close to the DP reference, with a limited gap (7 %) in CO2 emissions.
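
The paper itself does not include code; the following is a minimal sketch of the DQL core the abstract describes, i.e. a DNN that maps a state to the value of every discrete action, trained by temporal-difference updates from an experience replay buffer with a frozen target network. The state layout (battery SOC, vehicle speed, power request), the discretized power-split action set, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal Deep Q-Learning sketch (illustrative, not the paper's code).
import random
from collections import deque

import torch
import torch.nn as nn

N_ACTIONS = 11   # e.g. engine share of the power request in {0.0, 0.1, ..., 1.0}
STATE_DIM = 3    # illustrative: [battery SOC, vehicle speed, power request]
GAMMA = 0.99     # discount factor

class QNetwork(nn.Module):
    """DNN mapping a state to the value of every discrete action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=100_000)   # experience replay buffer

def act(state, epsilon):
    """Epsilon-greedy choice over the discrete power-split actions."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
    return int(q.argmax())

def update(batch_size=64):
    """One temporal-difference step on a minibatch from the replay buffer."""
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = zip(*random.sample(replay, batch_size))
    s = torch.as_tensor(s, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    r = torch.as_tensor(r, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    done = torch.as_tensor(done, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():   # bootstrap from the frozen target network
        target = r + GAMMA * (1.0 - done) * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In training, each step of the vehicle simulator would append a (state, action, reward, next_state, done) tuple to replay, call update(), and periodically copy the weights of q_net into target_net.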
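The reward sensitivity analysis mentioned in the abstract compares penalty functions that trade instantaneous CO2 against battery charge sustainability. A hedged illustration of what such reward shapes can look like; the set point, weight, and the linear/quadratic forms below are assumptions for illustration, while the paper evaluates its own penalty candidates:

```python
SOC_TARGET = 0.5   # charge-sustaining set point (illustrative)

def reward(co2_rate_g_per_s, soc, penalty="quadratic", weight=10.0):
    """Negative instantaneous CO2 plus a penalty pulling SOC to its target.

    co2_rate_g_per_s : instantaneous tailpipe CO2 [g/s]
    soc              : battery state of charge in [0, 1]
    penalty, weight  : illustrative knobs a sensitivity analysis would sweep
    """
    deviation = abs(soc - SOC_TARGET)
    if penalty == "linear":
        soc_penalty = weight * deviation
    elif penalty == "quadratic":
        soc_penalty = weight * deviation ** 2
    else:
        raise ValueError(f"unknown penalty shape: {penalty}")
    return -co2_rate_g_per_s - soc_penalty
```

A quadratic penalty is lenient near the set point and grows quickly far from it, while a linear one penalizes every deviation at the same rate; this is exactly the kind of trade-off between fuel economy and charge sustainability that a reward sensitivity analysis probes.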
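The Dynamic Programming benchmark exploits the fact that the drive cycle is known in advance, so the globally optimal split sequence can be computed offline by backward induction over a discretized SOC grid. A toy sketch of that scheme follows, with a deliberately simplified battery and CO2 model and a shortened horizon, none of which come from the paper:

```python
# Toy backward-DP benchmark over a discretized SOC grid (illustrative only).
import numpy as np

N_STEPS = 300                          # toy horizon [s]; a real WLTC lasts 1800 s
soc_grid = np.linspace(0.3, 0.7, 81)   # admissible charge-sustaining SOC window
splits = np.linspace(0.0, 1.5, 16)     # engine share; >1 recharges the battery
P_REQ = 20.0                           # toy constant power demand [kW]
CAP_KWS = 10.0 * 3600.0                # battery capacity in kW*s (10 kWh)

def co2_rate(engine_kw):
    """Toy CO2 map [g/s]; the real benchmark would use measured engine maps."""
    return 0.05 * engine_kw + 0.5 * (engine_kw > 0.0)

# Terminal cost enforces charge sustainability at the end of the cycle.
J = 1e4 * (soc_grid - 0.5) ** 2

# Backward induction: at each time step, keep the cheapest feasible action.
for _ in range(N_STEPS):
    J_new = np.empty_like(J)
    for i, soc in enumerate(soc_grid):
        best = np.inf
        for u in splits:
            batt_kw = (1.0 - u) * P_REQ        # battery covers the remainder
            soc2 = soc - batt_kw / CAP_KWS     # discharging lowers SOC
            if soc_grid[0] <= soc2 <= soc_grid[-1]:
                cost = co2_rate(u * P_REQ) + np.interp(soc2, soc_grid, J)
                best = min(best, cost)
        J_new[i] = best
    J = J_new

# J[i] is now the minimal cycle CO2 [g] achievable from initial SOC soc_grid[i].
print(f"optimal cost from SOC 0.5: {np.interp(0.5, soc_grid, J):.1f} g")
```

Because DP needs the whole cycle in advance, it is non-causal and serves only as an offline optimality reference rather than a deployable controller; the 7 % CO2 gap reported for the best agent is measured against this bound.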

Source journal
Transportation Engineering (Engineering - Automotive Engineering)
CiteScore: 8.10
Self-citation rate: 0.00%
Annual articles: 46
Review time: 90 days