Reconfiguration of Distribution Networks for Resilience Enhancement: A Deep Reinforcement Learning-based Approach

Mukesh Gautam, Michael Abdelmalak, Mohammad MansourLakouraj, M. Benidris, H. Livani

2022 IEEE Industry Applications Society Annual Meeting (IAS), October 9, 2022. DOI: 10.1109/IAS54023.2022.9939854
This paper proposes a deep reinforcement learning (DRL)-based approach for optimal Reconfiguration of Distribution Networks to improve their Resilience (R-DNR) against extreme events and multiple line outages. The objective of the proposed framework is to minimize critical load curtailment. The distribution network is represented as a graph, and the optimal network configuration is obtained by searching for the optimal spanning forest. The optimization problem is subject to radial topology and power balance constraints. Unlike existing analytical and population-based approaches, which must repeat the entire analysis and computation to find the optimal network configuration for each system operating state, DRL-based R-DNR, once properly trained, can quickly determine an optimal or near-optimal configuration even when system states change. The proposed R-DNR forms microgrids with distributed energy resources to reduce critical load curtailment when multiple line outages occur in the system because of extreme events. The proposed DRL-based model learns the action-value function using Q-learning, a model-free reinforcement learning technique. A case study on a 33-node distribution test system demonstrates the effectiveness of the proposed approach for R-DNR.
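The abstract names three concrete ingredients: a graph representation of the feeder, a radiality (spanning-forest) check, and a Q-learning update of the action-value function. The paper itself includes no code, so the sketch below is a minimal illustration under assumed simplifications: a tabular Q in place of the deep network, states and actions left abstract, and the reward understood as the negative of critical load curtailed. All names here (`is_radial`, `choose_action`, `q_update`) are hypothetical, not the authors' implementation.

```python
import random
from collections import defaultdict

import networkx as nx


def is_radial(num_nodes, closed_lines):
    """Radial-topology constraint: the energized lines must form a spanning
    forest (no loops); each tree is one feeder section or DER-backed island."""
    g = nx.Graph()
    g.add_nodes_from(range(num_nodes))
    g.add_edges_from(closed_lines)
    return nx.is_forest(g)


# Tabular Q-learning stand-in for the paper's deep action-value model.
Q = defaultdict(float)              # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate


def choose_action(state, candidate_actions):
    """Epsilon-greedy choice of which tie/sectionalizing switch to toggle."""
    if random.random() < EPS:
        return random.choice(candidate_actions)
    return max(candidate_actions, key=lambda a: Q[(state, a)])


def q_update(state, action, reward, next_state, next_actions):
    """One-step, model-free Q-learning update; here the reward would be the
    negative of the critical load curtailed in the resulting configuration."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


if __name__ == "__main__":
    # Toy radiality check on a 5-node fragment: lines 0-1, 1-2, 3-4 form a
    # forest of two trees (two islands); adding line 2-0 closes a loop.
    print(is_radial(5, [(0, 1), (1, 2), (3, 4)]))            # True
    print(is_radial(5, [(0, 1), (1, 2), (2, 0), (3, 4)]))    # False
```

Note the forest rather than tree criterion: allowing several disjoint trees is what lets the post-event network split into multiple DER-supplied islands, matching the microgrid-formation behavior the abstract describes.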