Shu Zheng;Zhi Wu;Xiao Zhang;Wei Gu;Jingtao Zhao;Zhihua Xu
{"title":"配电网电压无功控制的离线训练在线执行框架","authors":"Shu Zheng;Zhi Wu;Xiao Zhang;Wei Gu;Jingtao Zhao;Zhihua Xu","doi":"10.35833/MPCE.2024.000887","DOIUrl":null,"url":null,"abstract":"With the increasing integration of uncertain distributed renewable energies (DREs) into distribution networks (DNs), communication bottlenecks and the limited deployment of measurement devices pose significant challenges for advanced data-driven voltage control strategies such as deep reinforcement learning (DRL). To address these issues, this paper proposes an offline-training online-execution framework for volt-var control in DNs. In the offline-training phase, a graph convolutional network (GCN) -based denoising autoencoder (DAE), referred to as the deep learning (DL) agent, is designed and trained to capture spatial correlations among limited physical quantities. This agent predicts voltage values for nodes with missing measurements using historical load data, DRE outputs, and global voltages from simulations. Furthermore, the dual-timescale voltage control problem is formulated as a multi-agent Markov decision process. A DRL agent employing the multi-agent soft actor-critic (MASAC) algorithm is trained to regulate the tap position of on-load tap changer (OLTC) and reactive power output of photovoltaic (PV) inverters. In the online-execution phase, the DL agent supplements the limited measurement data, providing enhanced global observations for the DRL agent. This enables precise equipment control based on improved system state estimation. The proposed framework is validated on two modified IEEE test systems. 
Numerical results demonstrate its ability to effectively reconstruct missing measurements and achieve rapid, and accurate voltage control even under severe measurement deficiencies.","PeriodicalId":51326,"journal":{"name":"Journal of Modern Power Systems and Clean Energy","volume":"13 5","pages":"1726-1737"},"PeriodicalIF":6.1000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10899105","citationCount":"0","resultStr":"{\"title\":\"Offline-Training Online-Execution Framework for Volt-Var Control in Distribution Networks\",\"authors\":\"Shu Zheng;Zhi Wu;Xiao Zhang;Wei Gu;Jingtao Zhao;Zhihua Xu\",\"doi\":\"10.35833/MPCE.2024.000887\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the increasing integration of uncertain distributed renewable energies (DREs) into distribution networks (DNs), communication bottlenecks and the limited deployment of measurement devices pose significant challenges for advanced data-driven voltage control strategies such as deep reinforcement learning (DRL). To address these issues, this paper proposes an offline-training online-execution framework for volt-var control in DNs. In the offline-training phase, a graph convolutional network (GCN) -based denoising autoencoder (DAE), referred to as the deep learning (DL) agent, is designed and trained to capture spatial correlations among limited physical quantities. This agent predicts voltage values for nodes with missing measurements using historical load data, DRE outputs, and global voltages from simulations. Furthermore, the dual-timescale voltage control problem is formulated as a multi-agent Markov decision process. A DRL agent employing the multi-agent soft actor-critic (MASAC) algorithm is trained to regulate the tap position of on-load tap changer (OLTC) and reactive power output of photovoltaic (PV) inverters. 
In the online-execution phase, the DL agent supplements the limited measurement data, providing enhanced global observations for the DRL agent. This enables precise equipment control based on improved system state estimation. The proposed framework is validated on two modified IEEE test systems. Numerical results demonstrate its ability to effectively reconstruct missing measurements and achieve rapid, and accurate voltage control even under severe measurement deficiencies.\",\"PeriodicalId\":51326,\"journal\":{\"name\":\"Journal of Modern Power Systems and Clean Energy\",\"volume\":\"13 5\",\"pages\":\"1726-1737\"},\"PeriodicalIF\":6.1000,\"publicationDate\":\"2025-02-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10899105\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Modern Power Systems and Clean Energy\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10899105/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Modern Power Systems and Clean Energy","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10899105/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Offline-Training Online-Execution Framework for Volt-Var Control in Distribution Networks
With the increasing integration of uncertain distributed renewable energies (DREs) into distribution networks (DNs), communication bottlenecks and the limited deployment of measurement devices pose significant challenges for advanced data-driven voltage control strategies such as deep reinforcement learning (DRL). To address these issues, this paper proposes an offline-training online-execution framework for volt-var control in DNs. In the offline-training phase, a graph convolutional network (GCN)-based denoising autoencoder (DAE), referred to as the deep learning (DL) agent, is designed and trained to capture spatial correlations among limited physical quantities. This agent predicts voltage values for nodes with missing measurements using historical load data, DRE outputs, and global voltages from simulations. Furthermore, the dual-timescale voltage control problem is formulated as a multi-agent Markov decision process. A DRL agent employing the multi-agent soft actor-critic (MASAC) algorithm is trained to regulate the tap position of the on-load tap changer (OLTC) and the reactive power output of photovoltaic (PV) inverters. In the online-execution phase, the DL agent supplements the limited measurement data, providing enhanced global observations for the DRL agent. This enables precise equipment control based on improved system state estimation. The proposed framework is validated on two modified IEEE test systems. Numerical results demonstrate its ability to effectively reconstruct missing measurements and achieve rapid and accurate voltage control even under severe measurement deficiencies.
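The GCN-based denoising autoencoder described in the abstract propagates information across the grid topology so that buses without meters can borrow signals from their neighbors. A minimal sketch of the underlying GCN propagation rule, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), is shown below on a toy 4-bus feeder; the feature layout, layer sizes, and random weights are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-bus radial feeder: buses connected 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
# Per-bus features (assumed layout): [load, DRE output, measured voltage]
H = rng.random((4, 3))
H[2, 2] = 0.0                    # bus 2 has no voltage measurement (missing)

# Encoder-decoder pair: compress node features, decode a voltage estimate
W_enc = rng.random((3, 2))
W_dec = rng.random((2, 1))
Z = gcn_layer(A, H, W_enc)       # encoder
V_hat = gcn_layer(A, Z, W_dec)   # decoder: per-bus reconstructed voltage
print(V_hat.shape)               # one estimate per bus, including bus 2
```

In training, the reconstruction at unmetered buses would be supervised against global voltages taken from simulation, which is what lets the agent fill gaps at execution time.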
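For the PV inverters the DRL agent regulates, any commanded reactive power must respect the inverter's apparent-power rating, |Q| ≤ sqrt(S² − P²). A small hedged sketch of this standard capability check (function names are illustrative, not from the paper) shows how a raw continuous action could be projected into the feasible range:

```python
import math

def pv_q_limit(s_rating, p_out):
    """Reactive-power capability of a PV inverter: |Q| <= sqrt(S^2 - P^2)."""
    return math.sqrt(max(s_rating ** 2 - p_out ** 2, 0.0))

def clip_q_action(q_cmd, s_rating, p_out):
    """Project a raw reactive-power action onto the inverter's feasible range."""
    q_max = pv_q_limit(s_rating, p_out)
    return max(-q_max, min(q_cmd, q_max))

# 1.0 p.u. inverter producing 0.6 p.u. active power leaves ~0.8 p.u. of Q headroom
print(clip_q_action(0.8, 1.0, 0.6))
```

The OLTC action, by contrast, is a discrete tap position on the slower timescale, which is why the abstract frames the problem as a dual-timescale multi-agent Markov decision process.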
Journal Introduction:
Journal of Modern Power Systems and Clean Energy (MPCE), established in June 2013, is a peer-reviewed journal published quarterly in English. It is the first international power engineering journal originating in mainland China. MPCE publishes original papers, short letters, and review articles on modern power systems, with a focus on smart grid technology and renewable energy integration.