Optimizing energy management of smart grid using reinforcement learning aided by surrogate models built using physics-informed neural networks

IF 11.0 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (ENERGY & FUELS)
Julen Cestero , Carmine Delle Femine , Kenji S. Muro , Marco Quartulli , Marcello Restelli
{"title":"利用物理信息神经网络构建的代理模型辅助强化学习,优化智能电网的能源管理","authors":"Julen Cestero ,&nbsp;Carmine Delle Femine ,&nbsp;Kenji S. Muro ,&nbsp;Marco Quartulli ,&nbsp;Marcello Restelli","doi":"10.1016/j.apenergy.2025.126750","DOIUrl":null,"url":null,"abstract":"<div><div>Optimizing the energy management within a smart grid scenario presents significant challenges, primarily due to the complexity of real-world systems and the intricate interactions among various components. Reinforcement Learning (RL) is gaining prominence as a solution for addressing the challenges of Optimal Power Flow (OPF) in smart grids. However, RL needs to iterate compulsively throughout a given environment to obtain the optimal policy. This means obtaining samples from a, most likely, costly simulator, which can lead to a sample efficiency problem. In this work, we address this problem by substituting costly smart grid simulators with surrogate models built using Physics-Informed Neural Networks (PINNs), optimizing the RL policy training process by arriving at convergent results in a fraction of the time employed by the original environment. Specifically, we tested the performance of our PINN surrogate against other state-of-the-art data-driven surrogates and found that the understanding of the underlying physical nature of the problem makes the PINN surrogate the only method we studied capable of learning a good RL policy, in addition to not having to use samples from the real simulator. Our work shows that, by employing PINN surrogates, we can improve training speed by 50 %, compared to training the RL policy without using any surrogate model, enabling us to achieve results with scores on par with the original simulator more rapidly.</div></div>","PeriodicalId":246,"journal":{"name":"Applied Energy","volume":"401 ","pages":"Article 126750"},"PeriodicalIF":11.0000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimizing energy management of smart grid using reinforcement learning aided by surrogate models built using physics-informed neural networks\",\"authors\":\"Julen Cestero ,&nbsp;Carmine Delle Femine ,&nbsp;Kenji S. Muro ,&nbsp;Marco Quartulli ,&nbsp;Marcello Restelli\",\"doi\":\"10.1016/j.apenergy.2025.126750\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Optimizing the energy management within a smart grid scenario presents significant challenges, primarily due to the complexity of real-world systems and the intricate interactions among various components. Reinforcement Learning (RL) is gaining prominence as a solution for addressing the challenges of Optimal Power Flow (OPF) in smart grids. However, RL needs to iterate compulsively throughout a given environment to obtain the optimal policy. This means obtaining samples from a, most likely, costly simulator, which can lead to a sample efficiency problem. In this work, we address this problem by substituting costly smart grid simulators with surrogate models built using Physics-Informed Neural Networks (PINNs), optimizing the RL policy training process by arriving at convergent results in a fraction of the time employed by the original environment. 
Specifically, we tested the performance of our PINN surrogate against other state-of-the-art data-driven surrogates and found that the understanding of the underlying physical nature of the problem makes the PINN surrogate the only method we studied capable of learning a good RL policy, in addition to not having to use samples from the real simulator. Our work shows that, by employing PINN surrogates, we can improve training speed by 50 %, compared to training the RL policy without using any surrogate model, enabling us to achieve results with scores on par with the original simulator more rapidly.</div></div>\",\"PeriodicalId\":246,\"journal\":{\"name\":\"Applied Energy\",\"volume\":\"401 \",\"pages\":\"Article 126750\"},\"PeriodicalIF\":11.0000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Energy\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306261925014801\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENERGY & FUELS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Energy","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306261925014801","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Citations: 0

Abstract

Optimizing the energy management within a smart grid scenario presents significant challenges, primarily due to the complexity of real-world systems and the intricate interactions among their components. Reinforcement Learning (RL) is gaining prominence as a solution for addressing the challenges of Optimal Power Flow (OPF) in smart grids. However, RL must interact repeatedly with a given environment to obtain the optimal policy, which means drawing samples from what is most likely a costly simulator and can lead to a sample-efficiency problem. In this work, we address this problem by replacing costly smart grid simulators with surrogate models built using Physics-Informed Neural Networks (PINNs), optimizing the RL policy training process by arriving at convergent results in a fraction of the time required by the original environment. Specifically, we tested the performance of our PINN surrogate against other state-of-the-art data-driven surrogates and found that its understanding of the underlying physical nature of the problem makes the PINN surrogate the only method we studied capable of learning a good RL policy, in addition to not having to use samples from the real simulator. Our work shows that, by employing PINN surrogates, we can improve training speed by 50% compared to training the RL policy without any surrogate model, enabling us to reach scores on par with the original simulator more rapidly.
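The core technique the abstract describes is training a surrogate of the grid dynamics with a physics-informed loss: a data-fitting term plus a penalty on violations of the governing power-flow physics. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the state/action sizes, network shape, the `physics_residual` stand-in for the power-flow equations, and the weight `lam` are all illustrative assumptions.

```python
# Minimal sketch of a physics-informed surrogate of grid dynamics.
# Dimensions and the residual are illustrative assumptions, not paper details.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # hypothetical grid state/action sizes


class SurrogateDynamics(nn.Module):
    """Predicts the next grid state from (state, action)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, STATE_DIM),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def physics_residual(next_state):
    # Illustrative stand-in for a power-flow constraint: pretend the first
    # half of the state holds injections and the second half withdrawals,
    # and penalize any violation of net power balance.
    generation = next_state[..., : STATE_DIM // 2].sum(dim=-1)
    load = next_state[..., STATE_DIM // 2 :].sum(dim=-1)
    return (generation - load).pow(2).mean()


def pinn_loss(model, s, a, s_next, lam=0.1):
    pred = model(s, a)
    data_term = nn.functional.mse_loss(pred, s_next)  # fit simulator samples
    physics_term = physics_residual(pred)             # enforce the physics prior
    return data_term + lam * physics_term


# One illustrative training step on a random batch.
model = SurrogateDynamics()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s = torch.randn(32, STATE_DIM)
a = torch.randn(32, ACTION_DIM)
s_next = torch.randn(32, STATE_DIM)
loss = pinn_loss(model, s, a, s_next)
opt.zero_grad()
loss.backward()
opt.step()
```

The physics term is what the abstract credits for the surrogate's advantage over purely data-driven baselines: it constrains predictions even in regions the training samples do not cover.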
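The second piece is using the surrogate in place of the simulator during policy training. A hedged sketch of that plumbing follows, assuming the `SurrogateDynamics` and `physics_residual` defined above and a placeholder reward (the paper's actual OPF objective is not specified here).

```python
class SurrogateEnv:
    """Gym-style wrapper so a standard RL algorithm can step through the
    cheap surrogate instead of the costly simulator."""

    def __init__(self, model):
        self.model = model
        self.state = torch.zeros(STATE_DIM)

    def reset(self):
        self.state = torch.zeros(STATE_DIM)
        return self.state

    def step(self, action):
        action = torch.as_tensor(action, dtype=torch.float32)
        with torch.no_grad():
            self.state = self.model(self.state, action)
        # Placeholder reward: penalize power imbalance; a real OPF reward
        # would include generation cost, losses, constraint violations, etc.
        reward = -physics_residual(self.state).item()
        return self.state, reward, False, {}


env = SurrogateEnv(model)
obs = env.reset()
obs, r, done, info = env.step([0.1, -0.2])
```

Because every `step` is a single forward pass rather than a simulator call, sampling becomes cheap, which is consistent with the 50% training-speed improvement the abstract reports.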
Source journal
Applied Energy (Engineering & Technology - Engineering: Chemical)
CiteScore: 21.20
Self-citation rate: 10.70%
Annual publications: 1830
Review time: 41 days
Journal description: Applied Energy serves as a platform for sharing innovations, research, development, and demonstrations in energy conversion, conservation, and sustainable energy systems. The journal covers topics such as optimal energy resource use, environmental pollutant mitigation, and energy process analysis. It welcomes original papers, review articles, technical notes, and letters to the editor. Authors are encouraged to submit manuscripts that bridge the gap between research, development, and implementation. The journal addresses a wide spectrum of topics, including fossil and renewable energy technologies, energy economics, and environmental impacts. Applied Energy also explores modeling and forecasting, conservation strategies, and the social and economic implications of energy policies, including climate change mitigation. It is complemented by the open-access journal Advances in Applied Energy.