{"title":"大型电力系统暂态安全约束下的快速收敛深度强化学习优化调度","authors":"Tannan Xiao;Ying Chen;Han Diao;Shaowei Huang;Chen Shen","doi":"10.35833/MPCE.2024.000624","DOIUrl":null,"url":null,"abstract":"Power system optimal dispatch with transient security constraints is commonly represented as transient security-constrained optimal power flow (TSC-OPF). Deep reinforcement learning (DRL)-based TSC-OPF trains efficient decision-making agents that are adaptable to various scenarios and provide solution results quickly. However, due to the high dimensionality of the state space and action spaces, as well as the non-smoothness of dynamic constraints, existing DRL-based TSC-OPF solution methods face a significant challenge of the sparse reward problem. To address this issue, a fast-converging DRL method for optimal dispatch of large-scale power systems under transient security constraints is proposed in this paper. The Markov decision process (MDP) modeling of TSC-OPF is improved by reducing the observation space and smoothing the reward design, thus facilitating agent training. An improved deep deterministic policy gradient algorithm with curriculum learning, parallel exploration, and ensemble decision-making (DDPG-CL-PE-ED) is introduced to drastically enhance the efficiency of agent training and the accuracy of decision-making. The effectiveness, efficiency, and accuracy of the proposed method are demonstrated through experiments in the IEEE 39-bus system and a practical 710-bus regional power grid. The source code of the proposed method is made public on GitHub.","PeriodicalId":51326,"journal":{"name":"Journal of Modern Power Systems and Clean Energy","volume":"13 5","pages":"1495-1506"},"PeriodicalIF":6.1000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10944544","citationCount":"0","resultStr":"{\"title\":\"Fast-Converging Deep Reinforcement Learning for Optimal Dispatch of Large-Scale Power Systems Under Transient Security Constraints\",\"authors\":\"Tannan Xiao;Ying Chen;Han Diao;Shaowei Huang;Chen Shen\",\"doi\":\"10.35833/MPCE.2024.000624\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Power system optimal dispatch with transient security constraints is commonly represented as transient security-constrained optimal power flow (TSC-OPF). Deep reinforcement learning (DRL)-based TSC-OPF trains efficient decision-making agents that are adaptable to various scenarios and provide solution results quickly. However, due to the high dimensionality of the state space and action spaces, as well as the non-smoothness of dynamic constraints, existing DRL-based TSC-OPF solution methods face a significant challenge of the sparse reward problem. To address this issue, a fast-converging DRL method for optimal dispatch of large-scale power systems under transient security constraints is proposed in this paper. The Markov decision process (MDP) modeling of TSC-OPF is improved by reducing the observation space and smoothing the reward design, thus facilitating agent training. An improved deep deterministic policy gradient algorithm with curriculum learning, parallel exploration, and ensemble decision-making (DDPG-CL-PE-ED) is introduced to drastically enhance the efficiency of agent training and the accuracy of decision-making. 
The effectiveness, efficiency, and accuracy of the proposed method are demonstrated through experiments in the IEEE 39-bus system and a practical 710-bus regional power grid. The source code of the proposed method is made public on GitHub.\",\"PeriodicalId\":51326,\"journal\":{\"name\":\"Journal of Modern Power Systems and Clean Energy\",\"volume\":\"13 5\",\"pages\":\"1495-1506\"},\"PeriodicalIF\":6.1000,\"publicationDate\":\"2025-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10944544\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Modern Power Systems and Clean Energy\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10944544/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Modern Power Systems and Clean Energy","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10944544/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Fast-Converging Deep Reinforcement Learning for Optimal Dispatch of Large-Scale Power Systems Under Transient Security Constraints
Power system optimal dispatch with transient security constraints is commonly formulated as transient security-constrained optimal power flow (TSC-OPF). Deep reinforcement learning (DRL)-based TSC-OPF trains efficient decision-making agents that adapt to various scenarios and deliver solutions quickly. However, owing to the high dimensionality of the state and action spaces and the non-smoothness of the dynamic constraints, existing DRL-based TSC-OPF methods face the significant challenge of sparse rewards. To address this issue, this paper proposes a fast-converging DRL method for the optimal dispatch of large-scale power systems under transient security constraints. The Markov decision process (MDP) formulation of TSC-OPF is improved by reducing the observation space and smoothing the reward design, which facilitates agent training. An improved deep deterministic policy gradient algorithm with curriculum learning, parallel exploration, and ensemble decision-making (DDPG-CL-PE-ED) is introduced to substantially improve both training efficiency and decision-making accuracy. The effectiveness, efficiency, and accuracy of the proposed method are demonstrated through experiments on the IEEE 39-bus system and a practical 710-bus regional power grid. The source code of the proposed method is publicly available on GitHub.
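Two ingredients named in the abstract lend themselves to a brief illustration: smoothing the reward so that transient-security violations incur a graded penalty rather than a sparse terminal one, and ensemble decision-making, in which several independently trained actors are averaged at inference time. The Python sketch below is our own minimal illustration under assumed names (ANGLE_LIMIT, max_angle_deviation, the stand-in linear actors, and the penalty weight are all hypothetical); it is not the authors' released implementation, which is published on GitHub.

```python
# Illustrative sketch only -- NOT the paper's implementation. It contrasts a
# sparse stability penalty with a smoothed one, and shows action averaging
# over an ensemble of actors. All names and constants are assumptions.
import numpy as np

ANGLE_LIMIT = 180.0  # hypothetical rotor-angle separation threshold (degrees)

def sparse_reward(cost: float, max_angle_deviation: float) -> float:
    """Sparse design: a large fixed penalty only when instability occurs,
    which gives the agent almost no gradient information near the boundary."""
    if max_angle_deviation > ANGLE_LIMIT:
        return -1000.0
    return -cost

def smooth_reward(cost: float, max_angle_deviation: float) -> float:
    """Smoothed design: the penalty grows continuously with the violation
    margin, so nearly-insecure dispatches are already discouraged."""
    margin = max(0.0, max_angle_deviation - ANGLE_LIMIT)
    return -cost - 10.0 * margin  # penalty weight 10.0 is an arbitrary choice

def ensemble_action(actors, state: np.ndarray) -> np.ndarray:
    """Ensemble decision-making: average the actions proposed by several
    independently trained actor networks (here, plain callables)."""
    return np.mean([actor(state) for actor in actors], axis=0)

# Toy usage with stand-in linear "actors" mapping a 3-dim state to 2 actions.
rng = np.random.default_rng(0)
actors = [lambda s, W=rng.normal(size=(2, 3)): W @ s for _ in range(5)]
state = np.ones(3)
print(smooth_reward(cost=120.0, max_angle_deviation=195.0))
print(ensemble_action(actors, state))
```

The averaging step also suggests why an ensemble can improve decision accuracy: idiosyncratic errors of individual actors tend to cancel, while their shared, well-trained behavior is preserved.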
Journal Introduction:
The Journal of Modern Power Systems and Clean Energy (MPCE), established in June 2013, is a peer-reviewed journal published quarterly in English. It is the first international power engineering journal to originate in mainland China. MPCE publishes original papers, short letters, and review articles on modern power systems, with a focus on smart grid technology, renewable energy integration, and related topics.