Title: Multiagent Energy Management System Design Using Reinforcement Learning: The New Energy Lab Training Set Case Study
Authors: Parisa Mohammadi, Razieh Darshi, Hamidreza Gohari Darabkhani, Saeed Shamaghdari
Journal: International Transactions on Electrical Energy Systems, Vol. 2025, No. 1 (JCR Q3, Engineering, Electrical & Electronic; IF 1.9)
DOI: 10.1155/etep/3574030
Published: 2025-04-02 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1155/etep/3574030
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/etep/3574030
Citations: 0
Abstract
This paper proposes a multiagent reinforcement learning (MARL) approach to optimize energy management in a grid-connected microgrid (MG). Renewable energy resources (RES) and customers are modeled as autonomous agents that use reinforcement learning (RL) to interact with their environment. Agents are unaware of the actions or even the presence of other agents, which preserves privacy, and each agent individually aims to maximize its expected reward. A double auction (DA) algorithm determines the internal market price; after market clearing, any unmet load or surplus energy is exchanged with the main grid. The New Energy Lab (NEL) at Staffordshire University serves as the case study, comprising wind turbines (WTs), photovoltaic (PV) panels, a fuel cell (FC), a battery, and various loads. We introduce a model-free Q-learning (QL) algorithm for managing energy in the NEL: agents explore the environment, evaluate state-action pairs, and operate in a decentralized manner during both training and deployment, selecting actions that maximize long-term value. To evaluate the algorithms fairly for both customers and producers, a fairness-factor criterion is used. QL achieves a fairness factor of 1.2643, compared with 1.2358 for MC. QL also trains faster (1483 versus 1879.74 for MC) and requires less memory, making it the more efficient option.
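The two mechanisms the abstract describes, a double-auction clearing of the internal market followed by per-agent tabular Q-learning, can be illustrated with a minimal sketch. This is not the paper's implementation: the midpoint pricing rule, the function names, and the learning parameters (`alpha`, `gamma`) are illustrative assumptions.

```python
from collections import defaultdict

def clear_double_auction(bids, asks):
    """Clear a simple double auction.

    bids/asks: lists of (price, quantity) offers from buyers/sellers.
    Matches the highest bids against the lowest asks while the best bid
    still meets the best ask; returns (clearing_price, traded_quantity),
    with the clearing price taken as the midpoint of the marginal pair.
    """
    bids = sorted(bids, key=lambda b: -b[0])  # buyers, highest price first
    asks = sorted(asks, key=lambda a: a[0])   # sellers, lowest price first
    i = j = 0
    traded = 0.0
    last_bid = last_ask = None
    while i < len(bids) and j < len(asks) and bids[i][0] >= asks[j][0]:
        q = min(bids[i][1], asks[j][1])       # quantity this pair can trade
        traded += q
        last_bid, last_ask = bids[i][0], asks[j][0]
        bids[i] = (bids[i][0], bids[i][1] - q)
        asks[j] = (asks[j][0], asks[j][1] - q)
        if bids[i][1] == 0:
            i += 1
        if asks[j][1] == 0:
            j += 1
    if traded == 0:
        return None, 0.0                      # no overlap: trade with main grid instead
    return (last_bid + last_ask) / 2, traded

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One model-free tabular Q-learning step for a single agent.

    Q: defaultdict(float) mapping (state, action) -> value.
    Moves Q[(s, a)] toward the bootstrapped target r + gamma * max_a' Q(s', a').
    """
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```

In this decentralized setting each agent would keep its own `Q` table and submit bids or asks without observing other agents; only the cleared price and its own reward feed back into `q_update`.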
Journal Description:
International Transactions on Electrical Energy Systems publishes original research results on key advances in the generation, transmission, and distribution of electrical energy systems. Of particular interest are submissions concerning the modeling, analysis, optimization, and control of advanced electric power systems.
Manuscripts on topics of economics, finance, policies, insulation materials, low-voltage power electronics, plasmas, and magnetics will generally not be considered for review.