Reconfiguration of Distribution Networks for Resilience Enhancement: A Deep Reinforcement Learning-based Approach

Mukesh Gautam, Michael Abdelmalak, Mohammad MansourLakouraj, M. Benidris, H. Livani
{"title":"基于深度强化学习的配电网络弹性增强重构方法","authors":"Mukesh Gautam, Michael Abdelmalak, Mohammad MansourLakouraj, M. Benidris, H. Livani","doi":"10.1109/IAS54023.2022.9939854","DOIUrl":null,"url":null,"abstract":"This paper proposes a deep reinforcement learning (DRL)-based approach for optimal Reconfiguration of Distribution Networks to improve their Resilience (R-DNR) against extreme events and multiple line outages. The objective of the proposed framework is to minimize the amount of critical load curtailments. The distribution network is represented as a graph network, and the optimal network configuration is obtained by searching for the optimal spanning forest. The constraints to the optimization problem are the radial topology constraint and the power balance constraints. Unlike existing analytical and population-based approaches, which require the entire analysis and computation to be repeated to find the optimal network configuration for each system operating state, DRL-based R-DNR, once properly trained, can quickly determine optimal or near-optimal configuration even when system states change. The proposed R-DNR forms microgrids with distributed energy resources to reduce the critical load curtailment when multiple line outages occur in the system because of extreme events. The proposed DRL-based model learns the action-value function utilizing Q-learning, which is a model-free reinforcement learning technique. A case study on a 33-node distribution test system demonstrates the effectiveness and efficacy of the proposed approach for R-DNR.","PeriodicalId":193587,"journal":{"name":"2022 IEEE Industry Applications Society Annual Meeting (IAS)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Reconfiguration of Distribution Networks for Resilience Enhancement: A Deep Reinforcement Learning-based Approach\",\"authors\":\"Mukesh Gautam, Michael Abdelmalak, Mohammad MansourLakouraj, M. Benidris, H. Livani\",\"doi\":\"10.1109/IAS54023.2022.9939854\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a deep reinforcement learning (DRL)-based approach for optimal Reconfiguration of Distribution Networks to improve their Resilience (R-DNR) against extreme events and multiple line outages. The objective of the proposed framework is to minimize the amount of critical load curtailments. The distribution network is represented as a graph network, and the optimal network configuration is obtained by searching for the optimal spanning forest. The constraints to the optimization problem are the radial topology constraint and the power balance constraints. Unlike existing analytical and population-based approaches, which require the entire analysis and computation to be repeated to find the optimal network configuration for each system operating state, DRL-based R-DNR, once properly trained, can quickly determine optimal or near-optimal configuration even when system states change. The proposed R-DNR forms microgrids with distributed energy resources to reduce the critical load curtailment when multiple line outages occur in the system because of extreme events. The proposed DRL-based model learns the action-value function utilizing Q-learning, which is a model-free reinforcement learning technique. 
A case study on a 33-node distribution test system demonstrates the effectiveness and efficacy of the proposed approach for R-DNR.\",\"PeriodicalId\":193587,\"journal\":{\"name\":\"2022 IEEE Industry Applications Society Annual Meeting (IAS)\",\"volume\":\"2015 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Industry Applications Society Annual Meeting (IAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IAS54023.2022.9939854\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Industry Applications Society Annual Meeting (IAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAS54023.2022.9939854","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

This paper proposes a deep reinforcement learning (DRL)-based approach for optimal Reconfiguration of Distribution Networks to improve their Resilience (R-DNR) against extreme events and multiple line outages. The objective of the proposed framework is to minimize the amount of critical load curtailment. The distribution network is represented as a graph, and the optimal network configuration is obtained by searching for the optimal spanning forest. The optimization problem is subject to a radial topology constraint and power balance constraints. Unlike existing analytical and population-based approaches, which must repeat the entire analysis and computation to find the optimal network configuration for each system operating state, the DRL-based R-DNR, once properly trained, can quickly determine an optimal or near-optimal configuration even when system states change. The proposed R-DNR forms microgrids with distributed energy resources to reduce critical load curtailment when extreme events cause multiple line outages in the system. The proposed DRL-based model learns the action-value function via Q-learning, a model-free reinforcement learning technique. A case study on a 33-node distribution test system demonstrates the effectiveness of the proposed approach for R-DNR.
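The paper itself does not include code; as a minimal sketch of the radial-topology (spanning forest) constraint described in the abstract, the snippet below uses a union-find structure to test whether a candidate set of closed switches is loop-free, so that each energized component remains a tree fed by one source or DER. The edge-list encoding, node count, and function names are illustrative assumptions, not taken from the paper.

```python
# Sketch: checking the radial-topology constraint for a candidate switch
# configuration. A configuration is a spanning forest iff the closed
# edges contain no cycle; each resulting tree can form one microgrid.

def find(parent, x):
    # Path-compressing find for union-find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def is_radial(num_nodes, closed_edges):
    """Return True if the closed switches form a cycle-free topology."""
    parent = list(range(num_nodes))
    for u, v in closed_edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:          # closing this switch would create a loop
            return False
        parent[ru] = rv       # merge the two components
    return True

# Toy 5-node feeder; edges are (from, to) pairs of closed switches.
print(is_radial(5, [(0, 1), (1, 2), (3, 4)]))  # True: two radial trees
print(is_radial(5, [(0, 1), (1, 2), (2, 0)]))  # False: loop 0-1-2
```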
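As a second sketch, the following shows the tabular Q-learning update the abstract refers to for learning the action-value function. The epsilon-greedy policy, hyperparameter values, state/action encodings, and the use of negative critical load curtailment as the reward are assumptions for illustration only; the paper's deep variant would replace the lookup table with a neural network approximator.

```python
# Sketch: tabular Q-learning of the action-value function for R-DNR.
# Reward is assumed to be the negative critical load curtailment reported
# by a user-supplied power-flow simulator (not shown here).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = defaultdict(float)                    # Q[(state, action)] -> value

def choose_action(state, actions):
    # Epsilon-greedy policy over candidate switching actions.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    # Standard Q-learning target: r + gamma * max_a' Q(s', a').
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical transition: states/actions are hashable tuples, and the
# reward of -120.0 stands for 120 kW of curtailed critical load.
q_update(state=("outage_A",), action="close_sw_7", reward=-120.0,
         next_state=("outage_A", "sw_7"), next_actions=["close_sw_9", "done"])
```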