{"title":"配电网电压无功控制的进化深度强化学习","authors":"Ruiqi Si, Tianlu Gao, Yuxin Dai, Yuyang Bai, Yuqi Jiang, Jun Zhang","doi":"10.1109/DTPI55838.2022.9998947","DOIUrl":null,"url":null,"abstract":"As an important form of renewable energy integrated to the power system, distribution network is being challenged by voltage violation and network loss increase. Currently, model-based Vol-Var control (VVC) methods are widely used to reduce voltage violation and network loss. However, model-based methods need accurate parameters of distribution network. In practice, accurate model is difficult to obtain. In this paper, we propose a model-free evolutionary deep reinforcement learning (E-DRL) algorithm to solve the VVC problem. Based on E-DRL, the agent evolves autonomously by continuously interacting with the environment learning control strategy. Inverter-based PVs and SVGs are used to provide fast and continuous control. VVC problem is solved by soft actor-critic algorithm, which uses the maximum entropy technique to balance the exploration and exploitation. Numerical simulations on IEEE 13-bus system demonstrate that the proposed method has satisfied performance.","PeriodicalId":409822,"journal":{"name":"2022 IEEE 2nd International Conference on Digital Twins and Parallel Intelligence (DTPI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evolutionary Deep Reinforcement Learning for Volt-VAR Control in Distribution Network\",\"authors\":\"Ruiqi Si, Tianlu Gao, Yuxin Dai, Yuyang Bai, Yuqi Jiang, Jun Zhang\",\"doi\":\"10.1109/DTPI55838.2022.9998947\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As an important form of renewable energy integrated to the power system, distribution network is being challenged by voltage violation and network loss increase. Currently, model-based Vol-Var control (VVC) methods are widely used to reduce voltage violation and network loss. However, model-based methods need accurate parameters of distribution network. In practice, accurate model is difficult to obtain. In this paper, we propose a model-free evolutionary deep reinforcement learning (E-DRL) algorithm to solve the VVC problem. Based on E-DRL, the agent evolves autonomously by continuously interacting with the environment learning control strategy. Inverter-based PVs and SVGs are used to provide fast and continuous control. VVC problem is solved by soft actor-critic algorithm, which uses the maximum entropy technique to balance the exploration and exploitation. 
Numerical simulations on IEEE 13-bus system demonstrate that the proposed method has satisfied performance.\",\"PeriodicalId\":409822,\"journal\":{\"name\":\"2022 IEEE 2nd International Conference on Digital Twins and Parallel Intelligence (DTPI)\",\"volume\":\"49 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 2nd International Conference on Digital Twins and Parallel Intelligence (DTPI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DTPI55838.2022.9998947\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 2nd International Conference on Digital Twins and Parallel Intelligence (DTPI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DTPI55838.2022.9998947","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evolutionary Deep Reinforcement Learning for Volt-VAR Control in Distribution Network
As an important entry point for renewable energy into the power system, the distribution network is increasingly challenged by voltage violations and rising network losses. Currently, model-based Volt-VAR control (VVC) methods are widely used to reduce voltage violations and network losses, but they require accurate distribution network parameters, and in practice an accurate model is difficult to obtain. In this paper, we propose a model-free evolutionary deep reinforcement learning (E-DRL) algorithm to solve the VVC problem. With E-DRL, the agent evolves autonomously by continuously interacting with the environment to learn a control strategy. Inverter-based PVs and SVGs are used to provide fast, continuous control. The VVC problem is solved with the soft actor-critic (SAC) algorithm, which uses the maximum entropy technique to balance exploration and exploitation. Numerical simulations on the IEEE 13-bus system demonstrate that the proposed method achieves satisfactory performance.
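For context on the maximum entropy technique mentioned above: the standard soft actor-critic objective augments the expected return with a policy-entropy term weighted by a temperature coefficient, so a larger weight encourages exploration while a smaller one favors exploitation. The expression below is the generic SAC objective, given as a sketch rather than notation taken from the paper:

J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big]

In the VVC setting described, the action would plausibly be the continuous reactive power setpoints of the inverter-based PVs and SVGs, and the reward would penalize voltage violations and network loss; this mapping is inferred from the abstract rather than stated in it.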