{"title":"A Deep Reinforcement Learning-based Approach to Post-Disaster Routing of Movable Energy Resources","authors":"Mukesh Gautam, N. Bhusal, M. Benidris","doi":"10.1109/IAS54023.2022.9940073","DOIUrl":null,"url":null,"abstract":"After the occurrence of an extreme event, movable energy resources (MERs) can be an effective way to restore criti-cal loads to enhance power system resilience when no other forms of energy sources are available. Since the optimal locations of MERs after an extreme event are dependent on system operating states (e.g., loads at each node, on/off status of system branches, etc.), existing analytical and population-based approaches must repeat the entire analysis and computation when the system operating states change. Conversely, deep reinforcement learning (DRL)-based approaches can quickly determine optimal or near-optimal locations despite changes in system states if they are adequately trained with a variety of scenarios. The optimal deployment of MERs to improve power system resilience is proposed using a Deep Q-Learning-based approach. If they are available, MERs can also be used to supplement other types of resources. Following an extreme event, the proposed approach operates in two stages. The distribution network is modeled as a graph in the first stage, and Kruskal's spanning forest search algorithm (KSFSA) is used to reconfigure the network using tie-switches. The optimal or near-optimal locations of MERs are determined in the second stage to maximize critical load recovery. A case study on a 33-node distribution test system demonstrates the effectiveness and efficacy of the proposed approach for post-disaster routing of MERs.","PeriodicalId":193587,"journal":{"name":"2022 IEEE Industry Applications Society Annual Meeting (IAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Industry Applications Society Annual Meeting (IAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAS54023.2022.9940073","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
After the occurrence of an extreme event, movable energy resources (MERs) can be an effective way to restore critical loads and enhance power system resilience when no other energy sources are available. Since the optimal locations of MERs after an extreme event depend on the system operating state (e.g., the load at each node and the on/off status of system branches), existing analytical and population-based approaches must repeat the entire analysis and computation whenever the operating state changes. Conversely, deep reinforcement learning (DRL)-based approaches, if adequately trained on a variety of scenarios, can quickly determine optimal or near-optimal locations despite changes in system states. This paper proposes a Deep Q-Learning-based approach to the optimal deployment of MERs for improved power system resilience. When other types of resources are available, MERs can also be used to supplement them. Following an extreme event, the proposed approach operates in two stages. In the first stage, the distribution network is modeled as a graph, and Kruskal's spanning forest search algorithm (KSFSA) reconfigures the network using tie-switches. In the second stage, the optimal or near-optimal locations of MERs are determined to maximize critical load recovery. A case study on a 33-node distribution test system demonstrates the effectiveness and efficiency of the proposed approach for post-disaster routing of MERs.
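To make the first stage concrete, the sketch below shows a Kruskal-style spanning-forest construction over a distribution graph. It is a minimal illustration, not the paper's exact KSFSA formulation: the branch weights, the node count, and the treatment of tie-switches as higher-weight candidate edges are all assumptions for the example. The key property it preserves is that greedily adding loop-free branches keeps the reconfigured network radial.

```python
# Minimal Kruskal-style spanning-forest sketch for network reconfiguration.
# Weights and the example feeder are illustrative assumptions.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # closing this branch would create a loop
        self.parent[ra] = rb
        return True


def kruskal_spanning_forest(num_nodes, branches):
    """branches: list of (weight, u, v); returns the branches to keep closed.

    Sorting by weight and greedily adding loop-free branches yields a
    minimum-weight spanning forest, preserving the radial topology
    required of a distribution network.
    """
    uf = UnionFind(num_nodes)
    closed = []
    for weight, u, v in sorted(branches):
        if uf.union(u, v):
            closed.append((u, v))
    return closed


# Example: a 5-node feeder where branch (1, 3) is a tie-switch given a
# higher (less preferred) weight; damaged branches are simply omitted.
branches = [(1.0, 0, 1), (1.0, 1, 2), (1.0, 3, 4), (2.0, 1, 3)]
print(kruskal_spanning_forest(5, branches))
```

Because damaged branches are excluded from the candidate set, the result is in general a forest rather than a single tree; tie-switches are then the edges that reconnect islanded sections where possible.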
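For the second stage, the following sketch shows how a trained Deep Q-Network can map a system operating state to an MER placement in a single forward pass, which is what lets the approach avoid re-running an optimization when the state changes. The state layout (per-node loads concatenated with branch on/off flags), the layer sizes, and the branch count are assumptions for illustration; the abstract does not specify the paper's architecture or reward.

```python
# Minimal DQN placement sketch (PyTorch). Dimensions are assumptions.
import random
import torch
import torch.nn as nn

NUM_NODES = 33       # candidate MER locations (33-node test system)
NUM_BRANCHES = 37    # assumed: 32 sectionalizing + 5 tie switches
STATE_DIM = NUM_NODES + NUM_BRANCHES  # node loads + branch statuses


class QNetwork(nn.Module):
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),  # one Q-value per candidate node
        )

    def forward(self, state):
        return self.net(state)


def select_location(q_net, state, epsilon=0.1):
    """Epsilon-greedy choice of an MER placement node."""
    if random.random() < epsilon:
        return random.randrange(NUM_NODES)  # explore during training
    with torch.no_grad():
        return int(q_net(state).argmax())   # exploit learned Q-values


# Usage: after training on many outage scenarios, a new operating state
# yields a placement with no re-optimization.
q_net = QNetwork(STATE_DIM, NUM_NODES)
state = torch.rand(STATE_DIM)  # stand-in for loads and branch statuses
print("Chosen MER node:", select_location(q_net, state, epsilon=0.0))
```

During training, epsilon-greedy exploration over varied outage scenarios is what gives the agent the coverage needed to generalize to unseen operating states at deployment time.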