Fast-Converging Deep Reinforcement Learning for Optimal Dispatch of Large-Scale Power Systems Under Transient Security Constraints

IF 6.1 | CAS Tier 1 (Engineering & Technology) | JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Tannan Xiao;Ying Chen;Han Diao;Shaowei Huang;Chen Shen
{"title":"大型电力系统暂态安全约束下的快速收敛深度强化学习优化调度","authors":"Tannan Xiao;Ying Chen;Han Diao;Shaowei Huang;Chen Shen","doi":"10.35833/MPCE.2024.000624","DOIUrl":null,"url":null,"abstract":"Power system optimal dispatch with transient security constraints is commonly represented as transient security-constrained optimal power flow (TSC-OPF). Deep reinforcement learning (DRL)-based TSC-OPF trains efficient decision-making agents that are adaptable to various scenarios and provide solution results quickly. However, due to the high dimensionality of the state space and action spaces, as well as the non-smoothness of dynamic constraints, existing DRL-based TSC-OPF solution methods face a significant challenge of the sparse reward problem. To address this issue, a fast-converging DRL method for optimal dispatch of large-scale power systems under transient security constraints is proposed in this paper. The Markov decision process (MDP) modeling of TSC-OPF is improved by reducing the observation space and smoothing the reward design, thus facilitating agent training. An improved deep deterministic policy gradient algorithm with curriculum learning, parallel exploration, and ensemble decision-making (DDPG-CL-PE-ED) is introduced to drastically enhance the efficiency of agent training and the accuracy of decision-making. The effectiveness, efficiency, and accuracy of the proposed method are demonstrated through experiments in the IEEE 39-bus system and a practical 710-bus regional power grid. The source code of the proposed method is made public on GitHub.","PeriodicalId":51326,"journal":{"name":"Journal of Modern Power Systems and Clean Energy","volume":"13 5","pages":"1495-1506"},"PeriodicalIF":6.1000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10944544","citationCount":"0","resultStr":"{\"title\":\"Fast-Converging Deep Reinforcement Learning for Optimal Dispatch of Large-Scale Power Systems Under Transient Security Constraints\",\"authors\":\"Tannan Xiao;Ying Chen;Han Diao;Shaowei Huang;Chen Shen\",\"doi\":\"10.35833/MPCE.2024.000624\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Power system optimal dispatch with transient security constraints is commonly represented as transient security-constrained optimal power flow (TSC-OPF). Deep reinforcement learning (DRL)-based TSC-OPF trains efficient decision-making agents that are adaptable to various scenarios and provide solution results quickly. However, due to the high dimensionality of the state space and action spaces, as well as the non-smoothness of dynamic constraints, existing DRL-based TSC-OPF solution methods face a significant challenge of the sparse reward problem. To address this issue, a fast-converging DRL method for optimal dispatch of large-scale power systems under transient security constraints is proposed in this paper. The Markov decision process (MDP) modeling of TSC-OPF is improved by reducing the observation space and smoothing the reward design, thus facilitating agent training. An improved deep deterministic policy gradient algorithm with curriculum learning, parallel exploration, and ensemble decision-making (DDPG-CL-PE-ED) is introduced to drastically enhance the efficiency of agent training and the accuracy of decision-making. 
The effectiveness, efficiency, and accuracy of the proposed method are demonstrated through experiments in the IEEE 39-bus system and a practical 710-bus regional power grid. The source code of the proposed method is made public on GitHub.\",\"PeriodicalId\":51326,\"journal\":{\"name\":\"Journal of Modern Power Systems and Clean Energy\",\"volume\":\"13 5\",\"pages\":\"1495-1506\"},\"PeriodicalIF\":6.1000,\"publicationDate\":\"2025-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10944544\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Modern Power Systems and Clean Energy\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10944544/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Modern Power Systems and Clean Energy","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10944544/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Power system optimal dispatch with transient security constraints is commonly represented as transient security-constrained optimal power flow (TSC-OPF). Deep reinforcement learning (DRL)-based TSC-OPF trains efficient decision-making agents that are adaptable to various scenarios and provide solution results quickly. However, due to the high dimensionality of the state space and action spaces, as well as the non-smoothness of dynamic constraints, existing DRL-based TSC-OPF solution methods face a significant challenge of the sparse reward problem. To address this issue, a fast-converging DRL method for optimal dispatch of large-scale power systems under transient security constraints is proposed in this paper. The Markov decision process (MDP) modeling of TSC-OPF is improved by reducing the observation space and smoothing the reward design, thus facilitating agent training. An improved deep deterministic policy gradient algorithm with curriculum learning, parallel exploration, and ensemble decision-making (DDPG-CL-PE-ED) is introduced to drastically enhance the efficiency of agent training and the accuracy of decision-making. The effectiveness, efficiency, and accuracy of the proposed method are demonstrated through experiments in the IEEE 39-bus system and a practical 710-bus regional power grid. The source code of the proposed method is made public on GitHub.
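The abstract names two training tricks worth illustrating: smoothing the reward so transient-security violations produce a graded signal rather than a sparse failure flag, and ensemble decision-making over several actors trained with parallel exploration. The sketch below is a minimal illustration of both ideas in Python; every function name, network size, and the exact reward shape here are assumptions for exposition, not the paper's implementation (the authors' actual code is published on GitHub).

# A minimal, illustrative sketch of two ideas named in the abstract: a smoothed
# transient-security reward (replacing a sparse stable/unstable penalty with a
# continuous margin-based term) and ensemble decision-making (averaging the
# actions of several independently trained actors). All names, network sizes,
# and the reward shape are assumptions for illustration, not from the paper.

import numpy as np
import torch
import torch.nn as nn

def smoothed_security_reward(max_angle_dev_deg: float,
                             cost: float,
                             angle_limit_deg: float = 180.0,
                             cost_weight: float = 1.0,
                             penalty_weight: float = 5.0) -> float:
    """Continuous reward: economic cost plus a margin-based transient penalty.

    Instead of a sparse -1 "unstable" flag, the penalty grows smoothly with
    the maximum rotor-angle deviation observed in the time-domain simulation,
    giving the agent a usable learning signal even near the security boundary.
    """
    margin = max_angle_dev_deg / angle_limit_deg           # 0 (secure) .. >1 (unstable)
    security_penalty = penalty_weight * max(0.0, margin - 1.0) ** 2 \
                       + float(np.tanh(margin))            # smooth inside the limit too
    return -cost_weight * cost - security_penalty

class Actor(nn.Module):
    """Deterministic policy: reduced observation -> generator dispatch action."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),            # actions scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def ensemble_decide(actors, obs: torch.Tensor) -> torch.Tensor:
    """Average the dispatch actions proposed by an ensemble of trained actors."""
    with torch.no_grad():
        actions = torch.stack([actor(obs) for actor in actors])
    return actions.mean(dim=0)

if __name__ == "__main__":
    obs_dim, act_dim = 20, 5                               # toy sizes, not from the paper
    actors = [Actor(obs_dim, act_dim) for _ in range(4)]   # e.g. 4 parallel explorers
    obs = torch.randn(1, obs_dim)
    print("ensemble action:", ensemble_decide(actors, obs))
    print("reward:", smoothed_security_reward(max_angle_dev_deg=150.0, cost=1.2))

Averaging the ensemble's deterministic actions is one simple aggregation rule; the paper's DDPG-CL-PE-ED algorithm may combine its actors differently.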
Source Journal
Journal of Modern Power Systems and Clean Energy (ENGINEERING, ELECTRICAL & ELECTRONIC)

CiteScore: 12.30
Self-citation rate: 14.30%
Articles published: 97
Review time: 13 weeks
About the journal: Journal of Modern Power Systems and Clean Energy (MPCE), launched in June 2013, is a peer-reviewed quarterly journal published in English. It is the first international power engineering journal to originate in mainland China. MPCE publishes original papers, short letters, and review articles in the field of modern power systems, with a focus on smart grid technology, renewable energy integration, and related topics.