Distributed and Multi-Agent Reinforcement Learning Framework for Optimal Electric Vehicle Charging Scheduling

Impact Factor: 3.0 · JCR: Q3 (Energy & Fuels) · CAS: Zone 4 (Engineering Technology)
Energies · Published: 2024-07-26 · DOI: 10.3390/en17153694
Christos D. Korkas, Christos Tsaknakis, Athanasios Ch. Kapoutsis, Elias B. Kosmatopoulos
{"title":"Distributed and Multi-Agent Reinforcement Learning Framework for Optimal Electric Vehicle Charging Scheduling","authors":"Christos D. Korkas, Christos Tsaknakis, Athanasios Ch. Kapoutsis, Elias B. Kosmatopoulos","doi":"10.3390/en17153694","DOIUrl":null,"url":null,"abstract":"The increasing number of electric vehicles (EVs) necessitates the installation of more charging stations. The challenge of managing these grid-connected charging stations leads to a multi-objective optimal control problem where station profitability, user preferences, grid requirements and stability should be optimized. However, it is challenging to determine the optimal charging/discharging EV schedule, since the controller should exploit fluctuations in the electricity prices, available renewable resources and available stored energy of other vehicles and cope with the uncertainty of EV arrival/departure scheduling. In addition, the growing number of connected vehicles results in a complex state and action vectors, making it difficult for centralized and single-agent controllers to handle the problem. In this paper, we propose a novel Multi-Agent and distributed Reinforcement Learning (MARL) framework that tackles the challenges mentioned above, producing controllers that achieve high performance levels under diverse conditions. In the proposed distributed framework, each charging spot makes its own charging/discharging decisions toward a cumulative cost reduction without sharing any type of private information, such as the arrival/departure time of a vehicle and its state of charge, addressing the problem of cost minimization and user satisfaction. The framework significantly improves the scalability and sample efficiency of the underlying Deep Deterministic Policy Gradient (DDPG) algorithm. Extensive numerical studies and simulations demonstrate the efficacy of the proposed approach compared with Rule-Based Controllers (RBCs) and well-established, state-of-the-art centralized RL (Reinforcement Learning) algorithms, offering performance improvements of up to 25% and 20% in reducing the energy cost and increasing user satisfaction, respectively.","PeriodicalId":11557,"journal":{"name":"Energies","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energies","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/en17153694","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Citations: 0

Abstract

The increasing number of electric vehicles (EVs) necessitates the installation of more charging stations. The challenge of managing these grid-connected charging stations leads to a multi-objective optimal control problem in which station profitability, user preferences, and grid requirements and stability should be optimized. However, it is challenging to determine the optimal charging/discharging EV schedule, since the controller should exploit fluctuations in electricity prices, available renewable resources, and the stored energy available in other vehicles, while coping with the uncertainty of EV arrival/departure scheduling. In addition, the growing number of connected vehicles results in complex state and action vectors, making it difficult for centralized and single-agent controllers to handle the problem. In this paper, we propose a novel Multi-Agent and distributed Reinforcement Learning (MARL) framework that tackles the challenges mentioned above, producing controllers that achieve high performance levels under diverse conditions. In the proposed distributed framework, each charging spot makes its own charging/discharging decisions toward a cumulative cost reduction without sharing any private information, such as a vehicle's arrival/departure time or its state of charge, addressing the problem of cost minimization and user satisfaction. The framework significantly improves the scalability and sample efficiency of the underlying Deep Deterministic Policy Gradient (DDPG) algorithm. Extensive numerical studies and simulations demonstrate the efficacy of the proposed approach compared with Rule-Based Controllers (RBCs) and well-established, state-of-the-art centralized Reinforcement Learning (RL) algorithms, offering improvements of up to 25% in reducing energy cost and up to 20% in increasing user satisfaction.
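The abstract's central architectural claim is that each charging spot decides locally, without exchanging private information with other spots. The snippet below is a minimal sketch of that structure, not the authors' released code: one independent DDPG-style actor per spot maps a local observation to a bounded charge/discharge rate. The observation layout, network sizes, and the names `SpotActor` and `decentralized_step` are illustrative assumptions.

```python
# Minimal sketch (assumed details, not the paper's implementation) of a
# decentralized per-spot policy: one DDPG-style actor per charging spot,
# acting only on its local observation, so no private information
# (arrival/departure times, state of charge) crosses between spots.
import torch
import torch.nn as nn

class SpotActor(nn.Module):
    """Per-spot policy: local observation -> charging rate in [-1, 1],
    where negative values mean discharging back to the grid (V2G)."""
    def __init__(self, obs_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),  # bounded continuous action, as in DDPG
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Assumed local observation per spot:
# [electricity price, local renewable availability,
#  EV state of charge, normalized time until departure]
n_spots = 10
actors = [SpotActor() for _ in range(n_spots)]

def decentralized_step(local_obs: torch.Tensor) -> torch.Tensor:
    """Each spot decides from its own observation only; the spots never
    exchange observations or actions with one another."""
    with torch.no_grad():
        return torch.cat([actor(obs) for actor, obs in zip(actors, local_obs)])

actions = decentralized_step(torch.rand(n_spots, 4))
print(actions.shape)  # torch.Size([10]) -- one charge/discharge rate per spot
```

Because every actor conditions only on its own spot's state, the per-agent input size stays fixed as the number of spots grows, which is the scalability advantage the abstract attributes to the distributed design over a centralized controller whose state and action vectors grow with every connected vehicle.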
Source Journal
Energies (Energy & Fuels)
CiteScore: 6.20
Self-citation rate: 21.90%
Articles published: 8045
Average review time: 1.9 months
Journal description: Energies (ISSN 1996-1073) is an open access journal of related scientific research, technology development and policy and management studies. It publishes reviews, regular research papers, and communications. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced.