Intelligent Adjustment for Power System Operation Mode Based on Deep Reinforcement Learning

iEnergy. Published 2024-12-30. DOI: 10.23919/IEN.2024.0028
Wei Hu, Ning Mi, Shuang Wu, Huiling Zhang, Zhewen Hu, Lei Zhang
iEnergy, vol. 3, no. 4, pp. 252–260. Available at: https://ieeexplore.ieee.org/document/10818561/

Abstract

Power flow adjustment is a sequential decision problem: the operator makes a series of decisions so that the power flow satisfies the system's operational constraints, yielding a typical operating-mode power flow. However, this decision-making process relies heavily on human experience and becomes inefficient as the system grows complex. In addition, the results produced by current evaluation systems are difficult to use as direct guidance for intelligent power flow adjustment. To improve the efficiency and intelligence of power flow adjustment, this paper proposes an adjustment method based on deep reinforcement learning. Combining deep reinforcement learning theory with traditional power system operation mode analysis, the concept of region mapping is introduced to describe the adjustment process and to analyze power flow calculation and manual adjustment. Considering the characteristics of power flow adjustment, a Markov decision process model suited to the task is constructed. On this basis, a double Q-network learning method for power flow adjustment is proposed; it adjusts the power flow along a prescribed adjustment route, thereby raising the intelligence level of power flow adjustment. The method is tested on the China Electric Power Research Institute (CEPRI) test system.
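The abstract does not give the paper's network architecture or reward design, but the double Q update it builds on is standard. Below is a minimal tabular sketch of double Q-learning, assuming a hypothetical toy encoding in which states are discretized power-flow "regions" and actions are generator-output adjustments; all names and sizes here are illustrative, not the authors' implementation.

```python
import random

# Hypothetical toy sizes: 5 discretized power-flow regions, 3 adjustment actions.
N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)

# Double Q-learning keeps two value tables (the paper's method uses two networks).
q_a = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
q_b = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def double_q_update(s, a, r, s_next, done):
    """One double Q-learning step: one table selects the greedy next
    action, the other evaluates it, which reduces the overestimation
    bias of plain Q-learning."""
    if random.random() < 0.5:
        q_sel, q_eval = q_a, q_b
    else:
        q_sel, q_eval = q_b, q_a
    if done:
        target = r
    else:
        a_star = max(range(N_ACTIONS), key=lambda x: q_sel[s_next][x])
        target = r + GAMMA * q_eval[s_next][a_star]
    q_sel[s][a] += ALPHA * (target - q_sel[s][a])

# Toy transition: adjusting in state 0 with action 1 earns reward 1
# and stays in state 0; both tables' estimates grow toward r/(1-GAMMA).
random.seed(0)
for _ in range(200):
    double_q_update(0, 1, 1.0, 0, False)
```

In the paper's setting the tables would be replaced by neural networks and the reward would encode constraint satisfaction along the set adjustment route; the decoupled select/evaluate structure of the update is the part this sketch demonstrates.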