Reconfiguring Unbalanced Distribution Networks using Reinforcement Learning over Graphs

R. Jacob, Steve Paul, Wenyuan Li, Souma Chowdhury, Y. Gel, J. Zhang

2022 IEEE Texas Power and Energy Conference (TPEC), published 2022-02-28. DOI: 10.1109/TPEC54980.2022.9750805
The recent trend toward distribution system intelligence necessitates real-time, automated, and adaptable decision-making tools. Reconfiguring the distribution network by changing the status of switches can aid in loss minimization during normal operations and in resilience enhancement during disruptive events. Traditional methods for solving the network reconfiguration problem are model-based and scenario-specific. In addition, limited scalability and computational efficiency restrict the use of such techniques for online control, limitations that could potentially be addressed by neural-network-based models trained with reinforcement learning (RL). To this end, we formulate the reconfiguration problem as a Markov Decision Process whose optimal control policy is learned using RL. Considering the relevance of topology in decision making and the interaction between generation and demand at different buses, we model the power distribution network, along with its state variables, as a graph in the learning space. Consequently, we propose an RL-over-graphs approach in which a capsule-based graph neural network serves as the policy network. The developed model is validated on modified IEEE 13- and 34-bus test networks.
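To make the abstract's formulation concrete, below is a minimal toy sketch (not the paper's implementation) of the MDP it describes: the state is the graph of buses with per-bus net injection as node features, an action toggles one switch, and the reward here is a crude connectivity-based loss proxy. All names (`ReconfigEnv`, `Switch`, the reward definition) are illustrative assumptions; the paper's actual reward would come from an unbalanced power-flow solve on the IEEE 13/34-bus feeders, and its policy is a capsule-based graph neural network rather than anything shown here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Switch:
    u: int          # from-bus
    v: int          # to-bus
    closed: bool    # current switch status

class ReconfigEnv:
    """Toy reconfiguration MDP (illustrative, not the paper's model):
    state  = graph over closed switches + per-bus net injection (gen - demand),
    action = index of one switch to toggle,
    reward = negative of a crude loss proxy (unserved injection)."""

    def __init__(self, n_bus, switches, net_injection):
        self.n_bus = n_bus
        self.switches = list(switches)
        self.inj = list(net_injection)   # per-bus generation minus demand

    def state(self):
        # Graph-structured observation: adjacency over *closed* switches,
        # plus bus features -- the kind of input a GNN policy would consume.
        adj = {b: [] for b in range(self.n_bus)}
        for s in self.switches:
            if s.closed:
                adj[s.u].append(s.v)
                adj[s.v].append(s.u)
        return adj, tuple(self.inj)

    def step(self, action):
        # Toggle one switch, then score the resulting topology.
        s = self.switches[action]
        self.switches[action] = Switch(s.u, s.v, not s.closed)
        return self.state(), self.reward()

    def reward(self):
        # Loss proxy: penalize injection at buses disconnected from the
        # substation (bus 0). A realistic reward would instead use losses
        # from a three-phase unbalanced power-flow solution.
        adj, _ = self.state()
        seen, stack = {0}, [0]
        while stack:
            for nb in adj[stack.pop()]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        unserved = sum(abs(self.inj[b]) for b in range(self.n_bus) if b not in seen)
        return -unserved
```

A 4-bus example: with the tie switch between buses 1 and 3 open, bus 3 is unserved and the reward is negative; closing that switch restores connectivity and the penalty vanishes. A learned policy would select such toggles from the graph observation.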