{"title":"Deep Q-Learning-based Distribution Network Reconfiguration for Reliability Improvement","authors":"Mukesh Gautam, N. Bhusal, M. Benidris","doi":"10.1109/td43745.2022.9817000","DOIUrl":null,"url":null,"abstract":"Distribution network reconfiguration (DNR) has proved to be an economical and effective way to improve the reliability of distribution systems. As optimal network configuration depends on system operating states (e.g., loads at each node), existing analytical and population-based approaches need to repeat the entire analysis and computation to find the optimal network configuration with a change in system operating states. Contrary to this, if properly trained, deep reinforcement learning (DRL)-based DNR can determine optimal or nearoptimal configuration quickly even with changes in system states. In this paper, a Deep Q Learning-based framework is proposed for the optimal DNR to improve reliability of the system. An optimization problem is formulated with an objective function that minimizes the average curtailed power. Constraints of the optimization problem are radial topology constraint and all nodes traversing constraint. The distribution network is modeled as a graph and the optimal network configuration is determined by searching for an optimal spanning tree. The optimal spanning tree is the spanning tree with the minimum value of the average curtailed power. 
The effectiveness of the proposed framework is demonstrated through several case studies on 33-node and 69node distribution test systems.","PeriodicalId":241987,"journal":{"name":"2022 IEEE/PES Transmission and Distribution Conference and Exposition (T&D)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/PES Transmission and Distribution Conference and Exposition (T&D)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/td43745.2022.9817000","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Distribution network reconfiguration (DNR) has proved to be an economical and effective way to improve the reliability of distribution systems. Because the optimal network configuration depends on system operating states (e.g., loads at each node), existing analytical and population-based approaches must repeat the entire analysis and computation to find the optimal configuration whenever the operating states change. In contrast, a properly trained deep reinforcement learning (DRL)-based DNR method can determine an optimal or near-optimal configuration quickly even as system states change. In this paper, a Deep Q-Learning-based framework is proposed for optimal DNR to improve system reliability. An optimization problem is formulated with an objective function that minimizes the average curtailed power, subject to a radial topology constraint and an all-nodes traversal constraint. The distribution network is modeled as a graph, and the optimal network configuration is determined by searching for an optimal spanning tree, i.e., the spanning tree with the minimum average curtailed power. The effectiveness of the proposed framework is demonstrated through several case studies on 33-node and 69-node distribution test systems.
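The two constraints named in the abstract (radial topology and all-nodes traversal) together require every candidate switch configuration to be a spanning tree of the distribution graph. As a minimal sketch of that feasibility check (an illustration only, not the paper's implementation; the function name and edge representation are assumptions), a union-find test suffices: a configuration with exactly n-1 closed lines and no cycle is necessarily a spanning tree.

```python
def is_radial_configuration(num_nodes, closed_lines):
    """Return True iff the closed lines form a spanning tree of the
    distribution graph: exactly n-1 edges and no cycle, which together
    imply the graph is connected (all nodes are traversed) and radial.

    Hypothetical helper for illustration; nodes are 0..num_nodes-1 and
    closed_lines is a list of (u, v) pairs for energized lines."""
    if len(closed_lines) != num_nodes - 1:
        return False  # wrong edge count: cannot be a spanning tree

    parent = list(range(num_nodes))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in closed_lines:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # closing this line would create a loop
        parent[ru] = rv   # union the two components

    # n-1 edges with no cycle => a single connected tree
    return True
```

A DRL agent exploring configurations can use such a check to reject infeasible (looped or disconnected) actions before evaluating the average curtailed power of a candidate tree.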