Emergency voltage control strategy for power system transient stability enhancement based on edge graph convolutional network reinforcement learning

Changxu Jiang, Chenxi Liu, Yujuan Yuan, Junjie Lin, Zhenguo Shao, Chen Guo, Zhenjia Lin

Sustainable Energy, Grids and Networks, vol. 40, Article 101527. Published 2024-09-18. DOI: 10.1016/j.segan.2024.101527
Citations: 0
Abstract
Emergency control is essential for maintaining the stability of power systems, serving as a key defense against the destabilization and cascading failures triggered by faults. Under-voltage load shedding is a popular and effective approach to emergency control. However, with the increasing complexity and scale of power systems and the rise in uncertainty factors, traditional approaches struggle with computation speed, accuracy, and scalability. Deep reinforcement learning holds significant potential for power system decision-making problems, but existing deep reinforcement learning algorithms have limitations in effectively leveraging diverse operational features, which affects the reliability and efficiency of emergency control strategies. This paper presents an approach to real-time emergency voltage control for transient stability enhancement that integrates edge-graph convolutional networks with reinforcement learning. The method transforms the traditional emergency control optimization problem into a sequential decision-making process. The edge-graph convolutional neural network efficiently extracts critical information on the correlation between the power system operating state and node and branch information, as well as the uncertainty factors involved. Moreover, clipped double Q-learning, delayed policy updates, and target policy smoothing are introduced to address the overestimation and the abnormal sensitivity to hyperparameters of the deep deterministic policy gradient algorithm. The effectiveness of the proposed method in emergency control decision-making is verified on the IEEE 39-bus and IEEE 118-bus systems.
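The three modifications named above (clipped double Q-learning, delayed policy updates, and target policy smoothing) are the standard TD3-style fixes to the deep deterministic policy gradient algorithm. A minimal sketch of the target computation is shown below, using hypothetical stand-in functions for the actor and critics rather than the paper's edge-graph convolutional networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_policy(s):
    # Hypothetical deterministic target actor (stand-in, not the paper's network).
    return np.tanh(s.sum())

def q1_target(s, a):
    return 0.5 * a + s.mean()          # hypothetical target critic 1

def q2_target(s, a):
    return 0.4 * a + s.mean() + 0.1    # hypothetical target critic 2

def td3_target(s_next, reward, gamma=0.99, noise_std=0.2, noise_clip=0.5):
    # Target policy smoothing: perturb the target action with clipped noise.
    noise = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    a_next = np.clip(target_policy(s_next) + noise, -1.0, 1.0)
    # Clipped double Q-learning: take the minimum of the two target critics
    # to counteract the overestimation bias of a single critic.
    q_min = min(q1_target(s_next, a_next), q2_target(s_next, a_next))
    return reward + gamma * q_min

s_next = np.array([0.1, -0.2, 0.3])
y = td3_target(s_next, reward=1.0)

# Delayed policy update: the actor (and target networks) are updated only
# every `policy_delay` critic updates.
policy_delay = 2
actor_update_steps = [step for step in range(6) if step % policy_delay == 0]
```

This only illustrates the learning-target structure; the paper's contribution lies in combining it with an edge-graph convolutional encoder of the grid's node and branch features.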
Journal description:
Sustainable Energy, Grids and Networks (SEGAN) is an international peer-reviewed publication for theoretical and applied research dealing with energy, information grids and power networks, including smart grids from super- to micro-grid scales. SEGAN welcomes papers describing fundamental advances in mathematical, statistical or computational methods with application to power and energy systems, as well as papers on applications, computation and modeling in the areas of electrical and energy systems with coupled information and communication technologies.