Data augmented offline deep reinforcement learning for stochastic dynamic power dispatch

IF 5.0 | CAS Zone 2, Engineering & Technology | JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Wencong Xiao, Tao Yu, Zhiwei Chen, Zhenning Pan, Yufeng Wu, Qianjin Liu
{"title":"Data augmented offline deep reinforcement learning for stochastic dynamic power dispatch","authors":"Wencong Xiao,&nbsp;Tao Yu,&nbsp;Zhiwei Chen,&nbsp;Zhenning Pan,&nbsp;Yufeng Wu,&nbsp;Qianjin Liu","doi":"10.1016/j.ijepes.2025.110747","DOIUrl":null,"url":null,"abstract":"<div><div>Operating a power system under uncertainty while ensuring both economic efficiency and system security can be formulated as a stochastic dynamic economic dispatch (DED) problem. Deep reinforcement learning (DRL) offers a promising solution by learning dispatch policies through extensive system interaction and trial-and-error. However, the effectiveness of DRL is constrained by two key limitations: the high cost of real-time system interactions and the limited diversity of historical scenarios. To address these challenges, this paper proposes an offline deep reinforcement learning (ODRL) framework tailored for power system dispatch. First, a conditional generative adversarial network (CGAN) is employed to augment historical scenarios, thereby improving data diversity. The resulting training dataset combines both real and synthetically generated scenarios. Second, a conservative offline soft actor-critic (COSAC) algorithm is developed to learn dispatch policies directly from this hybrid offline dataset, eliminating the need for online interaction. Experimental results demonstrate that the proposed approach significantly outperforms both conventional DRL and existing offline learning methods in terms of reliability and economic performance.</div></div>","PeriodicalId":50326,"journal":{"name":"International Journal of Electrical Power & Energy Systems","volume":"169 ","pages":"Article 110747"},"PeriodicalIF":5.0000,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Electrical Power & Energy Systems","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0142061525002984","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Operating a power system under uncertainty while ensuring both economic efficiency and system security can be formulated as a stochastic dynamic economic dispatch (DED) problem. Deep reinforcement learning (DRL) offers a promising solution by learning dispatch policies through extensive system interaction and trial-and-error. However, the effectiveness of DRL is constrained by two key limitations: the high cost of real-time system interactions and the limited diversity of historical scenarios. To address these challenges, this paper proposes an offline deep reinforcement learning (ODRL) framework tailored for power system dispatch. First, a conditional generative adversarial network (CGAN) is employed to augment historical scenarios, thereby improving data diversity. The resulting training dataset combines both real and synthetically generated scenarios. Second, a conservative offline soft actor-critic (COSAC) algorithm is developed to learn dispatch policies directly from this hybrid offline dataset, eliminating the need for online interaction. Experimental results demonstrate that the proposed approach significantly outperforms both conventional DRL and existing offline learning methods in terms of reliability and economic performance.
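To make the first step of the pipeline concrete, below is a minimal sketch of CGAN-based scenario augmentation. The abstract only states that a conditional GAN generates synthetic scenarios from historical data, so everything here is an illustrative assumption: PyTorch, 24-step net-load profiles, a day-type condition vector, and the layer sizes are hypothetical choices, not the paper's configuration.

```python
# Illustrative CGAN for scenario augmentation (hypothetical shapes and sizes).
import torch
import torch.nn as nn

NOISE_DIM, COND_DIM, SCEN_DIM = 32, 8, 24  # noise, day-type label, 24-step profile


class Generator(nn.Module):
    """Maps (noise, condition) to a synthetic net-load scenario."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, SCEN_DIM),
        )

    def forward(self, z, c):
        # Conditioning is done by concatenating noise with the label.
        return self.net(torch.cat([z, c], dim=-1))


class Discriminator(nn.Module):
    """Scores (scenario, condition) pairs as real or generated."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SCEN_DIM + COND_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=-1))


def train_step(G, D, opt_g, opt_d, real, cond):
    """One adversarial update on a batch of real historical scenarios."""
    bce = nn.BCEWithLogitsLoss()
    batch = real.size(0)
    fake = G(torch.randn(batch, NOISE_DIM), cond)

    # Discriminator: label real scenarios 1 and generated scenarios 0.
    loss_d = (bce(D(real, cond), torch.ones(batch, 1))
              + bce(D(fake.detach(), cond), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    loss_g = bce(D(fake, cond), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

After training, profiles drawn from the generator are pooled with the real history to form the hybrid offline dataset that the dispatch policy is trained on.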
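The abstract likewise does not spell out how conservatism enters the soft actor-critic update. One common construction, shown below purely as a hedged sketch of what a conservative offline SAC critic loss could look like, adds a CQL-style penalty that pushes down Q-values on out-of-distribution actions while anchoring them to dataset actions. The names `q_net`, `target_q`, `policy.sample`, and `cql_weight` are assumed for illustration, and a single critic is used instead of SAC's usual twin critics to keep the sketch short.

```python
# Hedged sketch of a conservative critic update in the spirit of COSAC.
import torch
import torch.nn.functional as F


def conservative_critic_loss(q_net, target_q, policy, batch,
                             gamma=0.99, alpha=0.2, cql_weight=5.0, n_rand=10):
    # Tensors sampled from the hybrid offline dataset (real + CGAN scenarios).
    s, a, r, s2, done = batch

    # Standard SAC target: r + gamma * (Q_target(s', a') - alpha * log pi(a'|s')).
    with torch.no_grad():
        a2, logp2 = policy.sample(s2)
        target = r + gamma * (1.0 - done) * (target_q(s2, a2) - alpha * logp2)
    bellman = F.mse_loss(q_net(s, a), target)

    # Conservative penalty: logsumexp of Q over random actions minus Q on
    # dataset actions discourages overvaluing actions never seen offline.
    rand_a = torch.rand(n_rand, *a.shape) * 2.0 - 1.0  # actions in [-1, 1]
    q_rand = torch.stack([q_net(s, rand_a[i]) for i in range(n_rand)])
    penalty = torch.logsumexp(q_rand, dim=0).mean() - q_net(s, a).mean()

    return bellman + cql_weight * penalty
```

The penalty term is what makes purely offline training viable: without it, the critic tends to overestimate actions absent from the data, and the actor exploits those estimation errors.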
Source journal
International Journal of Electrical Power & Energy Systems
Category: Engineering & Technology, Engineering: Electrical & Electronic
CiteScore: 12.10
Self-citation rate: 17.30%
Annual articles: 1022
Review time: 51 days
Journal description: The journal covers theoretical developments in electrical power and energy systems and their applications. The coverage embraces: generation and network planning; reliability; long- and short-term operation; expert systems; neural networks; object-oriented systems; system control centres; database and information systems; state and parameter estimation; system security and adequacy; network theory, modelling and computation; small and large system dynamics; dynamic model identification; on-line control including load and switching control; protection; distribution systems; energy economics; impact of non-conventional systems; and man-machine interfaces. As well as original research papers, the journal publishes short contributions, book reviews and conference reports. All papers are peer-reviewed by at least two referees.