{"title":"基于多智能体深度强化学习的可再生能源主动配电网重构","authors":"Zheng Lin, Changxu Jiang, Yuejun Lu, Chenxi Liu","doi":"10.1109/CEEPE58418.2023.10166046","DOIUrl":null,"url":null,"abstract":"Distributed generation (DG) represented by wind turbines and photovoltaic systems has been extensively connected to the power distribution network (DN). However, the random fluctuations of DG pose new challenges to the safety, stability, and economic performance of DN, while distribution network reconfiguration (DNR) technology can alleviate this problem to some extent. Traditional heuristic algorithms are difficult to deal with uncertainties in the source-load and the increasing complexity of DN. Therefore, this paper proposes an active DNR method based on a model-free multi-agent deep deterministic policy gradient reinforcement learning framework (MADDPG). Firstly, the number of fundamental loops in the distribution network are determined and agent for each fundamental loop are deployed. Each agent has an actor and a critic network, which can control operations of the branch switches in the loop. Next, a mathematical model of DNR will be constructed. Then, a MADDPG training framework for distribution network reconfiguration is built, which adopts centralized training and distributed execution. Finally, the simulation cases are performed on an improved IEEE 33-bus power system to prove the effectiveness of MADDPG algorithm. 
The results illustrate that MADDPG algorithm can improve the economic and stability performance of the distribution network to some extent, demonstrating the effectiveness of the proposed approach.","PeriodicalId":431552,"journal":{"name":"2023 6th International Conference on Energy, Electrical and Power Engineering (CEEPE)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Active Distribution Network Reconfiguration with Renewable Energy Based on Multi-agent Deep Reinforcement Learning\",\"authors\":\"Zheng Lin, Changxu Jiang, Yuejun Lu, Chenxi Liu\",\"doi\":\"10.1109/CEEPE58418.2023.10166046\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Distributed generation (DG) represented by wind turbines and photovoltaic systems has been extensively connected to the power distribution network (DN). However, the random fluctuations of DG pose new challenges to the safety, stability, and economic performance of DN, while distribution network reconfiguration (DNR) technology can alleviate this problem to some extent. Traditional heuristic algorithms are difficult to deal with uncertainties in the source-load and the increasing complexity of DN. Therefore, this paper proposes an active DNR method based on a model-free multi-agent deep deterministic policy gradient reinforcement learning framework (MADDPG). Firstly, the number of fundamental loops in the distribution network are determined and agent for each fundamental loop are deployed. Each agent has an actor and a critic network, which can control operations of the branch switches in the loop. Next, a mathematical model of DNR will be constructed. Then, a MADDPG training framework for distribution network reconfiguration is built, which adopts centralized training and distributed execution. 
Finally, the simulation cases are performed on an improved IEEE 33-bus power system to prove the effectiveness of MADDPG algorithm. The results illustrate that MADDPG algorithm can improve the economic and stability performance of the distribution network to some extent, demonstrating the effectiveness of the proposed approach.\",\"PeriodicalId\":431552,\"journal\":{\"name\":\"2023 6th International Conference on Energy, Electrical and Power Engineering (CEEPE)\",\"volume\":\"35 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 6th International Conference on Energy, Electrical and Power Engineering (CEEPE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CEEPE58418.2023.10166046\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 6th International Conference on Energy, Electrical and Power Engineering (CEEPE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEEPE58418.2023.10166046","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Active Distribution Network Reconfiguration with Renewable Energy Based on Multi-agent Deep Reinforcement Learning
Distributed generation (DG), represented by wind turbines and photovoltaic systems, has been extensively connected to the power distribution network (DN). However, the random fluctuations of DG pose new challenges to the safety, stability, and economy of the DN, and distribution network reconfiguration (DNR) technology can alleviate this problem to some extent. Traditional heuristic algorithms struggle to handle source-load uncertainty and the increasing complexity of the DN. Therefore, this paper proposes an active DNR method based on a model-free multi-agent deep deterministic policy gradient (MADDPG) reinforcement learning framework. First, the fundamental loops of the distribution network are identified and one agent is deployed for each fundamental loop. Each agent has an actor network and a critic network and controls the branch switches in its loop. Next, a mathematical model of DNR is constructed. Then, a MADDPG training framework for distribution network reconfiguration is built, which adopts centralized training and distributed execution. Finally, simulation cases on a modified IEEE 33-bus power system verify the effectiveness of the MADDPG algorithm. The results illustrate that the MADDPG algorithm can improve the economy and stability of the distribution network to some extent, demonstrating the effectiveness of the proposed approach.
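The agent-deployment step described in the abstract can be made concrete. For a connected graph, the number of fundamental loops (the dimension of its cycle space) is E − V + 1, so for the IEEE 33-bus feeder with its 32 sectionalizing branches plus 5 tie branches the scheme yields one agent per loop. The sketch below illustrates this counting; the function name and the one-agent-per-loop mapping are illustrative assumptions, not code from the paper.

```python
def fundamental_loop_count(num_buses: int, num_branches: int) -> int:
    """Number of fundamental loops of a connected network graph.

    For a connected graph with V vertices (buses) and E edges (branches),
    the cycle-space dimension is E - V + 1. In the MADDPG scheme described
    in the abstract, each fundamental loop is assigned its own agent that
    operates the branch switches within that loop.
    """
    return num_branches - num_buses + 1


# IEEE 33-bus test feeder: 32 sectionalizing branches + 5 tie branches.
NUM_BUSES = 33
NUM_BRANCHES = 32 + 5

n_agents = fundamental_loop_count(NUM_BUSES, NUM_BRANCHES)
print(n_agents)  # 5 fundamental loops -> 5 agents
```

Keeping exactly one switch open per fundamental loop is a common way to preserve the radial topology of the feeder during reconfiguration, which is why the loop count also fixes the number of agents here.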