Research on Regional Power Grid Scheduling Strategy With Flexible Resource Clusters Based on Multi-Agent Deep Reinforcement Learning

IF 2.7 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
IET Smart Grid Pub Date: 2025-08-19 DOI: 10.1049/stg2.70028
Gao Guanzhong, Yaping Li, Shengchun Yang, Jiahao Yan, Kedong Zhu, Jianguo Yao, Wenbo Mao
{"title":"基于多智能体深度强化学习的柔性资源集群区域电网调度策略研究","authors":"Gao Guanzhong,&nbsp;Yaping Li,&nbsp;Shengchun Yang,&nbsp;Jiahao Yan,&nbsp;Kedong Zhu,&nbsp;Jianguo Yao,&nbsp;Wenbo Mao","doi":"10.1049/stg2.70028","DOIUrl":null,"url":null,"abstract":"<p>The increasing integration of distributed energy resources, controllable loads and energy storage systems is reshaping power systems by enhancing flexibility in supply–demand balancing. However, their large-scale deployment imposes significant communication and computational burdens on dispatch centres. Traditional model-driven scheduling methods often struggle to maintain efficiency and fairness among stakeholders, whereas existing deep reinforcement learning approaches lack mechanisms to address real-time response deviations within resource clusters leading to unstable policy performance. To tackle these challenges, this paper proposes a real-time scheduling strategy for partitioned power grids based on multi-agent deep reinforcement learning. A hierarchical distributed control framework is developed, where different agents manage regional grids and coordinate decision-making across flexible resource clusters. The framework adopts centralised training and distributed execution integrating real-time regulation performance as a regularisation term in agent rewards to improve learning stability and decision efficiency. Simulation results under varying renewable energy penetration levels demonstrate that the proposed method enhances scheduling performance and system robustness. This approach provides a promising solution for managing large-scale flexible resources and contributes to the intelligent operation of new type power systems.</p>","PeriodicalId":36490,"journal":{"name":"IET Smart Grid","volume":"8 1","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/stg2.70028","citationCount":"0","resultStr":"{\"title\":\"Research on Regional Power Grid Scheduling Strategy With Flexible Resource Clusters Based on Multi-Agent Deep Reinforcement Learning\",\"authors\":\"Gao Guanzhong,&nbsp;Yaping Li,&nbsp;Shengchun Yang,&nbsp;Jiahao Yan,&nbsp;Kedong Zhu,&nbsp;Jianguo Yao,&nbsp;Wenbo Mao\",\"doi\":\"10.1049/stg2.70028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The increasing integration of distributed energy resources, controllable loads and energy storage systems is reshaping power systems by enhancing flexibility in supply–demand balancing. However, their large-scale deployment imposes significant communication and computational burdens on dispatch centres. Traditional model-driven scheduling methods often struggle to maintain efficiency and fairness among stakeholders, whereas existing deep reinforcement learning approaches lack mechanisms to address real-time response deviations within resource clusters leading to unstable policy performance. To tackle these challenges, this paper proposes a real-time scheduling strategy for partitioned power grids based on multi-agent deep reinforcement learning. A hierarchical distributed control framework is developed, where different agents manage regional grids and coordinate decision-making across flexible resource clusters. The framework adopts centralised training and distributed execution integrating real-time regulation performance as a regularisation term in agent rewards to improve learning stability and decision efficiency. 
Simulation results under varying renewable energy penetration levels demonstrate that the proposed method enhances scheduling performance and system robustness. This approach provides a promising solution for managing large-scale flexible resources and contributes to the intelligent operation of new type power systems.</p>\",\"PeriodicalId\":36490,\"journal\":{\"name\":\"IET Smart Grid\",\"volume\":\"8 1\",\"pages\":\"\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-08-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/stg2.70028\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Smart Grid\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/stg2.70028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Smart Grid","FirstCategoryId":"1085","ListUrlMain":"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/stg2.70028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The increasing integration of distributed energy resources, controllable loads and energy storage systems is reshaping power systems by enhancing flexibility in supply–demand balancing. However, their large-scale deployment imposes significant communication and computational burdens on dispatch centres. Traditional model-driven scheduling methods often struggle to maintain efficiency and fairness among stakeholders, whereas existing deep reinforcement learning approaches lack mechanisms to address real-time response deviations within resource clusters, leading to unstable policy performance. To tackle these challenges, this paper proposes a real-time scheduling strategy for partitioned power grids based on multi-agent deep reinforcement learning. A hierarchical distributed control framework is developed, in which different agents manage regional grids and coordinate decision-making across flexible resource clusters. The framework adopts centralised training and distributed execution, integrating real-time regulation performance as a regularisation term in agent rewards to improve learning stability and decision efficiency. Simulation results under varying renewable energy penetration levels demonstrate that the proposed method enhances scheduling performance and system robustness. This approach provides a promising solution for managing large-scale flexible resources and contributes to the intelligent operation of new-type power systems.
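The reward-shaping idea mentioned in the abstract (real-time regulation performance used as a regularisation term in each agent's reward) can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the dispatch-cost input and the weight `lam` are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper): a per-agent reward that
# augments the scheduling objective with a regularisation term penalising the
# real-time response deviation of the agent's flexible resource cluster.
import numpy as np

def compute_agent_reward(dispatch_cost: float,
                         commanded_power: np.ndarray,
                         delivered_power: np.ndarray,
                         lam: float = 0.1) -> float:
    """Reward = -(scheduling cost) - lam * (real-time response deviation).

    commanded_power: set-points the regional agent sent to its cluster (MW).
    delivered_power: power the cluster actually delivered in real time (MW).
    lam: weight of the regularisation term, discouraging policies that rely
         on responses the cluster cannot track.
    """
    response_deviation = float(np.mean(np.abs(commanded_power - delivered_power)))
    return -dispatch_cost - lam * response_deviation

# Example: a cluster that under-delivers its set-points is penalised relative
# to one that tracks them exactly, which is the stabilising effect the
# abstract attributes to the regularisation term.
cmd = np.array([10.0, 5.0, 8.0])
real = np.array([9.0, 3.0, 7.0])
print(compute_agent_reward(dispatch_cost=120.0,
                           commanded_power=cmd,
                           delivered_power=real))
```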

Source Journal
IET Smart Grid (Computer Science – Computer Networks and Communications)
CiteScore: 6.70
Self-citation rate: 4.30%
Articles published: 41
Review time: 29 weeks