Multiagent Energy Management System Design Using Reinforcement Learning: The New Energy Lab Training Set Case Study

Impact Factor: 1.9 · CAS Zone 4 (Engineering & Technology) · JCR Q3 (Engineering, Electrical & Electronic)
Parisa Mohammadi, Razieh Darshi, Hamidreza Gohari Darabkhani, Saeed Shamaghdari
{"title":"Multiagent Energy Management System Design Using Reinforcement Learning: The New Energy Lab Training Set Case Study","authors":"Parisa Mohammadi,&nbsp;Razieh Darshi,&nbsp;Hamidreza Gohari Darabkhani,&nbsp;Saeed Shamaghdari","doi":"10.1155/etep/3574030","DOIUrl":null,"url":null,"abstract":"<div>\n <p>This paper proposes a multiagent reinforcement learning (MARL) approach to optimize energy management in a grid-connected microgrid (MG). Renewable energy resources (RES) and customers are modeled as autonomous agents using reinforcement learning (RL) to interact with their environment. Agents are unaware of the actions or presence of others, which ensures privacy. Each agent aims to maximize its expected rewards individually. A double auction (DA) algorithm determines the price of the internal market. After market clearing, any unmet loads or excess energy are exchanged with the main grid. The New Energy Lab (NEL) at Staffordshire University is used as a case study, including wind turbines (WTs), photovoltaic (PV) panels, a fuel cell (FC), a battery, and various loads. We introduce a model-free Q-learning (QL) algorithm for managing energy in the NEL. Agents explore the environment, evaluate state-action pairs, and operate in a decentralized manner during training and implementation. The algorithm selects actions that maximize long-term value. To fairly consider the algorithms for both customers and producers, a fairness factor criterion is used. QL achieves a fairness factor of 1.2643, compared to 1.2358 for MC. It also has a shorter training time of 1483 compared with 1879.74 for MC and requires less memory, making it more efficient.</p>\n </div>","PeriodicalId":51293,"journal":{"name":"International Transactions on Electrical Energy Systems","volume":"2025 1","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/etep/3574030","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Transactions on Electrical Energy Systems","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/etep/3574030","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

This paper proposes a multiagent reinforcement learning (MARL) approach to optimize energy management in a grid-connected microgrid (MG). Renewable energy resources (RES) and customers are modeled as autonomous agents that use reinforcement learning (RL) to interact with their environment. Agents are unaware of the actions or even the presence of other agents, which preserves privacy. Each agent aims to maximize its own expected reward. A double auction (DA) algorithm determines the internal market price; after market clearing, any unmet load or excess energy is exchanged with the main grid. The New Energy Lab (NEL) at Staffordshire University is used as a case study, comprising wind turbines (WTs), photovoltaic (PV) panels, a fuel cell (FC), a battery, and various loads. We introduce a model-free Q-learning (QL) algorithm for managing energy in the NEL. Agents explore the environment, evaluate state-action pairs, and operate in a decentralized manner during both training and deployment. The algorithm selects actions that maximize long-term value. To evaluate the algorithms fairly for both customers and producers, a fairness-factor criterion is used. QL achieves a fairness factor of 1.2643, compared to 1.2358 for the Monte Carlo (MC) baseline. QL also has a shorter training time (1483 versus 1879.74 for MC) and requires less memory, making it more efficient.
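The model-free QL update referenced above is the standard Q-learning bootstrap: with learning rate \(\alpha\), discount factor \(\gamma\), reward \(r\), and successor state \(s'\), each visited state-action pair is updated toward the best achievable next value,

\[ Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]. \]

To make the two mechanisms concrete, below is a minimal Python sketch pairing a tabular Q-learning agent with a double-auction clearing step. The bid format, the midpoint pricing rule, and all names and parameter values here are illustrative assumptions for this sketch, not the paper's actual state, action, reward, or market design.

```python
# Minimal sketch (assumptions noted): a tabular, model-free Q-learning
# agent plus a double-auction (DA) clearing step. The bid format and
# midpoint pricing rule are illustrative choices, not the paper's design.
import random
from collections import defaultdict

class QLearningAgent:
    """Tabular Q-learning with epsilon-greedy exploration."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = actions            # e.g. candidate bid prices
        self.alpha = alpha                # learning rate
        self.gamma = gamma                # discount factor
        self.epsilon = epsilon            # exploration probability
        self.q = defaultdict(float)       # Q[(state, action)] -> value

    def act(self, state):
        # Explore with probability epsilon, otherwise act greedily.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

def double_auction_clear(buy_bids, sell_offers):
    """Match the highest-priced buyers with the lowest-priced sellers.

    Bids and offers are (agent_id, price, quantity) tuples; each trade
    is priced at the midpoint of the matched bid and offer.
    """
    buyers = sorted(buy_bids, key=lambda b: -b[1])
    sellers = sorted(sell_offers, key=lambda s: s[1])
    trades, i, j = [], 0, 0
    while i < len(buyers) and j < len(sellers) and buyers[i][1] >= sellers[j][1]:
        qty = min(buyers[i][2], sellers[j][2])
        price = (buyers[i][1] + sellers[j][1]) / 2
        trades.append((buyers[i][0], sellers[j][0], qty, price))
        buyers[i] = (buyers[i][0], buyers[i][1], buyers[i][2] - qty)
        sellers[j] = (sellers[j][0], sellers[j][1], sellers[j][2] - qty)
        if buyers[i][2] == 0:
            i += 1
        if sellers[j][2] == 0:
            j += 1
    return trades  # unmatched quantity would be settled with the main grid

# Usage example: two producer offers and two consumer bids.
if __name__ == "__main__":
    offers = [("pv", 0.08, 5.0), ("wt", 0.10, 3.0)]      # sell: id, price, kWh
    bids = [("load1", 0.12, 4.0), ("load2", 0.09, 6.0)]  # buy: id, price, kWh
    print(double_auction_clear(bids, offers))
```

Midpoint pricing is only one common DA clearing rule; as in the abstract, any load or surplus left unmatched after internal clearing is exchanged with the main grid, which the sketch leaves to the caller.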

Source Journal

International Transactions on Electrical Energy Systems (Engineering, Electrical & Electronic)
CiteScore: 6.70
Self-citation rate: 8.70%
Publication volume: 342 articles
Journal introduction: International Transactions on Electrical Energy Systems publishes original research results on key advances in the generation, transmission, and distribution of electrical energy systems. Of particular interest are submissions concerning the modeling, analysis, optimization, and control of advanced electric power systems. Manuscripts on topics of economics, finance, policies, insulation materials, low-voltage power electronics, plasmas, and magnetics will generally not be considered for review.