Comparative of control strategies on electrical vehicle fleet charging management strategies under uncertainties

IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhewei Zhang, Rémy Rigo-Mariani, Nouredine Hadjsaid
{"title":"不确定条件下电动汽车车队充电管理控制策略比较","authors":"Zhewei Zhang ,&nbsp;Rémy Rigo-Mariani ,&nbsp;Nouredine Hadjsaid","doi":"10.1016/j.egyai.2025.100522","DOIUrl":null,"url":null,"abstract":"<div><div>The growing penetration of Electric Vehicles (EVs) in transportation brings challenges to power distribution systems due to uncertain usage patterns and increased peak loads. Effective EV fleet charging management strategies are needed to minimize network impacts, such as peak charging power. While existing studies have addressed uncertainties in future arrivals, they often overlook the uncertainties in user-provided inputs of current ongoing charging EVs, such as estimated departure time and energy demand. This paper analyzes the impact of these uncertainties and evaluates three management strategies: a baseline Model Predictive Control (MPC), a data-hybrid MPC, and a fully data-driven Deep Reinforcement Learning (DRL) approach. For data-hybrid MPC, we adopted a diffusion model to handle user input uncertainties and a Gaussian Mixture Model for modeling arrival/departure scenarios. Additionally, the DRL method is based on a Partially Observable Markov Decision Process (POMDP) to manage uncertainty and employs a Convolutional Neural Network (CNN) for feature extraction. Robustness tests under different user uncertainty levels show that the data hybrid MPC performs better on the baseline MPC by 20 %, while the DRL-based method achieves around 10 % improvement.</div></div>","PeriodicalId":34138,"journal":{"name":"Energy and AI","volume":"21 ","pages":"Article 100522"},"PeriodicalIF":9.6000,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparative of control strategies on electrical vehicle fleet charging management strategies under uncertainties\",\"authors\":\"Zhewei Zhang ,&nbsp;Rémy Rigo-Mariani ,&nbsp;Nouredine Hadjsaid\",\"doi\":\"10.1016/j.egyai.2025.100522\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The growing penetration of Electric Vehicles (EVs) in transportation brings challenges to power distribution systems due to uncertain usage patterns and increased peak loads. Effective EV fleet charging management strategies are needed to minimize network impacts, such as peak charging power. While existing studies have addressed uncertainties in future arrivals, they often overlook the uncertainties in user-provided inputs of current ongoing charging EVs, such as estimated departure time and energy demand. This paper analyzes the impact of these uncertainties and evaluates three management strategies: a baseline Model Predictive Control (MPC), a data-hybrid MPC, and a fully data-driven Deep Reinforcement Learning (DRL) approach. For data-hybrid MPC, we adopted a diffusion model to handle user input uncertainties and a Gaussian Mixture Model for modeling arrival/departure scenarios. Additionally, the DRL method is based on a Partially Observable Markov Decision Process (POMDP) to manage uncertainty and employs a Convolutional Neural Network (CNN) for feature extraction. 
Robustness tests under different user uncertainty levels show that the data hybrid MPC performs better on the baseline MPC by 20 %, while the DRL-based method achieves around 10 % improvement.</div></div>\",\"PeriodicalId\":34138,\"journal\":{\"name\":\"Energy and AI\",\"volume\":\"21 \",\"pages\":\"Article 100522\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2025-05-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Energy and AI\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666546825000540\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energy and AI","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666546825000540","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


The growing penetration of Electric Vehicles (EVs) in transportation brings challenges to power distribution systems due to uncertain usage patterns and increased peak loads. Effective EV fleet charging management strategies are needed to minimize network impacts, such as peak charging power. While existing studies have addressed uncertainties in future arrivals, they often overlook the uncertainties in the user-provided inputs for EVs that are currently charging, such as estimated departure time and energy demand. This paper analyzes the impact of these uncertainties and evaluates three management strategies: a baseline Model Predictive Control (MPC), a data-hybrid MPC, and a fully data-driven Deep Reinforcement Learning (DRL) approach. For the data-hybrid MPC, we adopt a diffusion model to handle user input uncertainties and a Gaussian Mixture Model to generate arrival/departure scenarios. The DRL method is based on a Partially Observable Markov Decision Process (POMDP) to manage uncertainty and employs a Convolutional Neural Network (CNN) for feature extraction. Robustness tests under different user uncertainty levels show that the data-hybrid MPC outperforms the baseline MPC by 20 %, while the DRL-based method achieves around a 10 % improvement.
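
As a rough illustration of the arrival/departure scenario modeling mentioned for the data-hybrid MPC, the sketch below fits a Gaussian Mixture Model to synthetic charging-session data and samples plausible scenarios. The synthetic data, the two-component choice, and the clipping window are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: fit a Gaussian Mixture Model to historical EV
# arrival/departure times and sample scenarios for the look-ahead window.
# All data and parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "historical" sessions: columns = [arrival hour, departure hour].
workplace = rng.normal(loc=[8.5, 17.5], scale=[1.0, 1.5], size=(300, 2))
overnight = rng.normal(loc=[18.5, 7.0 + 24], scale=[1.2, 1.0], size=(200, 2))
history = np.vstack([workplace, overnight])

# Fit a 2-component GMM to the joint arrival/departure distribution.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(history)

# Sample plausible arrival/departure scenarios for the planning horizon.
scenarios, _ = gmm.sample(n_samples=50)
scenarios = np.clip(scenarios, 0.0, 48.0)  # keep times within a 2-day window
print(scenarios[:5])
```

In such a setup, the sampled scenarios would be fed to the MPC as candidate future sessions; the paper additionally uses a diffusion model for the uncertainty in user-provided inputs, which is not reproduced here.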
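
Similarly, the following is a minimal sketch of the kind of peak-minimizing schedule a baseline MPC could solve at each step, assuming the user-provided departure times and energy demands are exact. The fleet parameters, horizon, and the use of cvxpy are assumptions for illustration only, not the authors' formulation.

```python
# Minimal sketch: schedule fleet charging to minimize the site peak power,
# given (assumed exact) remaining energy needs and departure times.
import cvxpy as cp

dt = 0.5          # time step [h]
horizon = 24      # number of steps in the look-ahead window (12 h)
p_max = 7.4       # per-charger power limit [kW]

# Hypothetical fleet state: (remaining energy need [kWh], steps until departure)
evs = [(20.0, 20), (12.0, 10), (30.0, 24)]

power = cp.Variable((len(evs), horizon), nonneg=True)  # charging power per EV per step [kW]
peak = cp.Variable(nonneg=True)                        # site peak power over the horizon [kW]

constraints = [cp.sum(power, axis=0) <= peak]          # aggregate load bounded by the peak variable
for i, (energy, steps_left) in enumerate(evs):
    constraints.append(power[i, :] <= p_max)                          # charger limit
    constraints.append(cp.sum(power[i, :steps_left]) * dt == energy)  # meet demand before departure
    if steps_left < horizon:
        constraints.append(power[i, steps_left:] == 0)                # no charging after departure

problem = cp.Problem(cp.Minimize(peak), constraints)
problem.solve()
print(f"Minimized peak charging power: {float(peak.value):.2f} kW")
```

With uncertain inputs, the same optimization would be re-solved as departures and demands deviate from their estimates, which is where the data-hybrid and DRL variants evaluated in the paper come in.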
Source journal: Energy and AI (Engineering, miscellaneous)
CiteScore: 16.50
Self-citation rate: 0.00%
Annual publications: 64
Average review time: 56 days