A hybrid model predictive control-deep reinforcement learning algorithm with application to plug-in electric vehicles smart charging
Francesco Liberati, Mohab M.H. Atanasious, Emanuele De Santis, Alessandro Di Giorgio
Sustainable Energy, Grids and Networks, vol. 44, Article 101963 (published 2025-09-09). DOI: 10.1016/j.segan.2025.101963
Citations: 0
Abstract
This paper focuses on a novel use of deep reinforcement learning (RL) to optimally tune, in real time, a model predictive control (MPC) smart charging algorithm for plug-in electric vehicles (PEVs). The coefficients of the terminal cost function of the MPC algorithm are updated online by a neural network, which is trained offline to maximize the control performance (linked to the satisfaction of the users’ charging preferences and the tracking of a power reference profile at the PEV fleet level). This approach is different and more flexible compared to most of the other approaches in the literature, which instead use deep RL to fix the MPC parametrization offline. The proposed method allows one to select a shorter MPC control window (compared to standard MPC) and/or a shorter sampling time, while improving the control performance. Simulations are presented to validate the approach: the proposed MPC-RL controller improves control performance by an average of 4.3% compared to classic MPC, while requiring less computing time.
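To make the hybrid scheme concrete, the sketch below illustrates the general idea of online terminal-cost tuning in a receding-horizon charging loop: a small neural network (standing in for the offline-trained RL policy) maps the current state to the terminal cost coefficient, which is then used in a short-horizon charging optimization for a single PEV. This is a minimal illustration, not the authors' implementation; the dynamics, cost terms, horizon, network architecture, and all numerical values are assumptions.

```python
# Minimal sketch of an MPC step whose terminal-cost coefficient is set online
# by a neural network (placeholder for the offline-trained RL policy).
# All names, models, and constants below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

DT = 0.25      # sampling time [h] (assumed)
H = 8          # MPC horizon length (a short window, as the paper allows)
P_MAX = 7.4    # maximum charging power [kW] (assumed)
CAP = 40.0     # battery capacity [kWh] (assumed)

def terminal_coeff_net(features, W1, b1, W2, b2):
    """Tiny MLP mapping the current state to the terminal-cost coefficient.
    In the paper this mapping is trained offline with deep RL; here the
    weights are random placeholders."""
    h = np.tanh(W1 @ features + b1)
    return float(np.exp(W2 @ h + b2))  # keep the coefficient positive

def mpc_step(soc, soc_target, p_ref, q_terminal):
    """One finite-horizon charging problem for a single PEV: track the
    per-vehicle share of the power reference and penalize the terminal
    state-of-charge deviation with the RL-supplied coefficient."""
    def cost(p):
        track = np.sum((p - p_ref) ** 2)          # power reference tracking
        soc_end = soc + DT * np.sum(p) / CAP      # simple integrator battery model
        return track + q_terminal * (soc_end - soc_target) ** 2

    p0 = np.full(H, 0.5 * P_MAX)
    res = minimize(cost, p0, bounds=[(0.0, P_MAX)] * H, method="L-BFGS-B")
    return res.x[0]  # apply only the first move (receding horizon)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)
W2, b2 = rng.normal(size=16) * 0.1, 0.0

soc, soc_target = 0.3, 0.9
p_ref = np.full(H, 3.0)                           # per-PEV reference power [kW]
for t in range(4):                                # a few receding-horizon steps
    feats = np.array([soc, soc_target - soc, p_ref[0] / P_MAX])
    q_t = terminal_coeff_net(feats, W1, b1, W2, b2)   # online coefficient update
    p_apply = mpc_step(soc, soc_target, p_ref, q_t)
    soc += DT * p_apply / CAP
    print(f"t={t}: q_terminal={q_t:.2f}, p={p_apply:.2f} kW, soc={soc:.3f}")
```

In the actual method described in the abstract, the network is trained offline so that its online choice of terminal-cost coefficients maximizes fleet-level control performance; the sketch only shows where such a network would plug into a receding-horizon charging loop.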
Journal description:
Sustainable Energy, Grids and Networks (SEGAN) is an international peer-reviewed publication for theoretical and applied research dealing with energy, information grids and power networks, including smart grids from super- to micro-grid scales. SEGAN welcomes papers describing fundamental advances in mathematical, statistical or computational methods with application to power and energy systems, as well as papers on applications, computation and modeling in the areas of electrical and energy systems with coupled information and communication technologies.