Data augmented offline deep reinforcement learning for stochastic dynamic power dispatch

Wencong Xiao, Tao Yu, Zhiwei Chen, Zhenning Pan, Yufeng Wu, Qianjin Liu

International Journal of Electrical Power & Energy Systems, Volume 169, Article 110747. Published 2025-05-19. DOI: 10.1016/j.ijepes.2025.110747
Abstract
Operating a power system under uncertainty while ensuring both economic efficiency and system security can be formulated as a stochastic dynamic economic dispatch (DED) problem. Deep reinforcement learning (DRL) offers a promising solution by learning dispatch policies through extensive system interaction and trial-and-error. However, the effectiveness of DRL is constrained by two key limitations: the high cost of real-time system interactions and the limited diversity of historical scenarios. To address these challenges, this paper proposes an offline deep reinforcement learning (ODRL) framework tailored for power system dispatch. First, a conditional generative adversarial network (CGAN) is employed to augment historical scenarios, thereby improving data diversity. The resulting training dataset combines both real and synthetically generated scenarios. Second, a conservative offline soft actor-critic (COSAC) algorithm is developed to learn dispatch policies directly from this hybrid offline dataset, eliminating the need for online interaction. Experimental results demonstrate that the proposed approach significantly outperforms both conventional DRL and existing offline learning methods in terms of reliability and economic performance.
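The abstract names the two building blocks but, being an abstract, gives no implementation detail. The two sketches below are therefore illustrative assumptions only: the first follows the standard conditional GAN recipe for scenario generation, the second follows the general conservative Q-learning idea layered on soft actor-critic. Neither is the authors' COSAC code, and all names, dimensions, and weights (`ScenarioGenerator`, `cond_dim`, `alpha_cql`, and so on) are hypothetical.

First, a minimal sketch of conditional scenario augmentation, assuming the generator is conditioned on forecast features and emits a fixed-horizon load/renewable trajectory:

```python
# Hypothetical CGAN generator for scenario augmentation. The shapes and
# the meaning of the conditioning vector (e.g., day-ahead forecast
# features) are assumptions; the abstract does not specify them.
import torch
import torch.nn as nn

class ScenarioGenerator(nn.Module):
    def __init__(self, noise_dim=32, cond_dim=24, horizon=24, n_units=6):
        super().__init__()
        self.horizon, self.n_units = horizon, n_units
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, horizon * n_units),
        )

    def forward(self, z, cond):
        # Concatenating noise with the condition lets sampled scenarios
        # track the forecast while still varying across draws of z.
        out = self.net(torch.cat([z, cond], dim=-1))
        return out.view(-1, self.horizon, self.n_units)
```

Second, a sketch of a conservative critic update in the spirit of a conservative offline SAC: alongside the usual soft Bellman error, a penalty depresses Q-values on policy-sampled (potentially out-of-distribution) actions relative to dataset actions, which is what makes purely offline training viable without online interaction. A single critic is used here for brevity; practical SAC variants use twin critics and target smoothing.

```python
# Hypothetical conservative critic loss (CQL-style penalty on SAC),
# not the paper's COSAC implementation. `policy.sample` is an assumed
# interface returning (action, log_prob).
import torch
import torch.nn.functional as F

def conservative_critic_loss(critic, target_critic, policy, batch,
                             gamma=0.99, alpha_ent=0.2, alpha_cql=1.0):
    # batch holds offline transitions drawn from the hybrid dataset of
    # real and CGAN-generated scenarios.
    s, a, r, s_next, done = batch

    # Soft Bellman target: entropy-regularized bootstrap (standard SAC).
    with torch.no_grad():
        a_next, logp_next = policy.sample(s_next)
        q_next = target_critic(s_next, a_next) - alpha_ent * logp_next
        td_target = r + gamma * (1.0 - done) * q_next

    q_data = critic(s, a)
    bellman_loss = F.mse_loss(q_data, td_target)

    # Conservative term: push Q down on actions the current policy would
    # take and up on actions actually present in the data, discouraging
    # the actor from exploiting overestimated out-of-distribution values.
    a_pi, _ = policy.sample(s)
    conservative_term = critic(s, a_pi).mean() - q_data.mean()

    return bellman_loss + alpha_cql * conservative_term
```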
About the journal
The journal covers theoretical developments in electrical power and energy systems and their applications. The coverage embraces: generation and network planning; reliability; long- and short-term operation; expert systems; neural networks; object-oriented systems; system control centres; database and information systems; stock and parameter estimation; system security and adequacy; network theory, modelling and computation; small and large system dynamics; dynamic model identification; online control, including load and switching control; protection; distribution systems; energy economics; the impact of non-conventional systems; and man-machine interfaces.
As well as original research papers, the journal publishes short contributions, book reviews and conference reports. All papers are peer-reviewed by at least two referees.