GRU-integrated constrained soft actor-critic learning enabled fully distributed scheduling strategy for residential virtual power plant

Xiaoyun Deng, Yongdong Chen, Dongchuan Fan, Youbo Liu, Chao Ma

Global Energy Interconnection, Vol. 7, No. 2, pp. 117-129, April 2024. DOI: 10.1016/j.gloei.2024.04.001
Abstract
In this study, a novel residential virtual power plant (RVPP) scheduling method that leverages a gated recurrent unit (GRU)-integrated deep reinforcement learning (DRL) algorithm is proposed. In the proposed scheme, the GRU-integrated DRL algorithm guides the RVPP to participate effectively in both the day-ahead and real-time markets, lowering electricity purchase costs and consumption risks for end-users. The Lagrangian relaxation technique is introduced to transform the constrained Markov decision process (CMDP) into an unconstrained optimization problem, which guarantees that the constraints are strictly satisfied without tuning penalty coefficients. Furthermore, to enhance the scalability of the constrained soft actor-critic (CSAC)-based RVPP scheduling approach, a fully distributed scheduling architecture is designed to enable plug-and-play integration of residential distributed energy resources (RDER). Case studies performed on the constructed RVPP scenario validate the performance of the proposed methodology in enhancing the responsiveness of the RDER to power tariffs, balancing grid supply and demand, and ensuring customer comfort.
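The constraint handling described in the abstract follows the standard Lagrangian relaxation of a CMDP: the constrained objective (maximize expected reward subject to an expected-cost budget) is replaced by a saddle-point problem over the policy and a Lagrange multiplier that is adjusted by dual ascent, so no fixed penalty coefficient has to be chosen. The sketch below illustrates that general idea in PyTorch, with a GRU encoder over the observation history; the names (GRUPolicy, dual_update, cost_limit) are hypothetical and this is not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a GRU-based Gaussian
# policy head plus dual ascent on a Lagrange multiplier for E[cost] <= limit.
import torch
import torch.nn as nn

class GRUPolicy(nn.Module):
    """GRU encoder over the observation history, Gaussian policy head."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, h=None):
        out, h = self.gru(obs_seq, h)        # encode the sequence of observations
        last = out[:, -1]                    # latest hidden state summarizes history
        return self.mu(last), self.log_std(last).clamp(-5, 2), h

# Lagrange multiplier parameterized through its log so that lambda stays positive.
log_lambda = torch.zeros(1, requires_grad=True)
lambda_opt = torch.optim.Adam([log_lambda], lr=3e-4)

def dual_update(mean_episode_cost: float, cost_limit: float) -> torch.Tensor:
    """Dual ascent: lambda grows when the cost constraint is violated."""
    lam = log_lambda.exp()
    # Minimizing -lam * (cost - limit) raises lam when cost > limit and lowers it otherwise.
    loss = -(lam * (mean_episode_cost - cost_limit))
    lambda_opt.zero_grad()
    loss.backward()
    lambda_opt.step()
    return lam.detach()
```

In a SAC-style actor update, the detached multiplier would then weight the cost critic's value inside the policy loss, so the policy trades off reward, entropy, and constraint violation without hand-tuned penalties.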