A Deep Reinforcement Learning Method for Economic Power Dispatch of Microgrid in OPAL-RT Environment

F. Lin, Chao-Fu Chang, Yu-Cheng Huang, Tzu-Ming Su
{"title":"A Deep Reinforcement Learning Method for Economic Power Dispatch of Microgrid in OPAL-RT Environment","authors":"F. Lin, Chao-Fu Chang, Yu-Cheng Huang, Tzu-Ming Su","doi":"10.3390/technologies11040096","DOIUrl":null,"url":null,"abstract":"This paper focuses on the economic power dispatch (EPD) operation of a microgrid in an OPAL-RT environment. First, a long short-term memory (LSTM) network is proposed to forecast the load information of a microgrid to determine the output of a power generator and the charging/discharging control strategy of a battery energy storage system (BESS). Then, a deep reinforcement learning method, the deep deterministic policy gradient (DDPG), is utilized to develop the power dispatch of a microgrid to minimize the total energy expense while considering power constraints, load uncertainties and electricity price. Moreover, a microgrid built in Cimei Island of Penghu Archipelago, Taiwan, is investigated to examine the compliance with the requirements of equality and inequality constraints and the performance of the deep reinforcement learning method. Furthermore, a comparison of the proposed method with the experience-based energy management system (EMS), Newton particle swarm optimization (Newton-PSO) and the deep Q-learning network (DQN) is provided to evaluate the obtained solutions. In this study, the average deviation of the LSTM forecast accuracy is less than 5%. In addition, the daily operating cost of the proposed method obtains a 3.8% to 7.4% lower electricity cost compared to that of the other methods. Finally, a detailed emulation in the OPAL-RT environment is carried out to validate the effectiveness of the proposed method.","PeriodicalId":22341,"journal":{"name":"Technologies","volume":"57 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/technologies11040096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper focuses on the economic power dispatch (EPD) operation of a microgrid in an OPAL-RT environment. First, a long short-term memory (LSTM) network is proposed to forecast the load of the microgrid, which informs the output of the power generator and the charging/discharging control strategy of a battery energy storage system (BESS). Then, a deep reinforcement learning method, the deep deterministic policy gradient (DDPG), is utilized to develop the power dispatch of the microgrid so as to minimize the total energy expense while considering power constraints, load uncertainties and the electricity price. Moreover, a microgrid built on Cimei Island of the Penghu Archipelago, Taiwan, is investigated to verify compliance with the equality and inequality constraints and to assess the performance of the deep reinforcement learning method. Furthermore, the proposed method is compared with an experience-based energy management system (EMS), Newton particle swarm optimization (Newton-PSO) and a deep Q-learning network (DQN) to evaluate the obtained solutions. In this study, the average deviation of the LSTM load forecast is less than 5%, and the proposed method achieves a 3.8% to 7.4% lower daily electricity cost than the other methods. Finally, a detailed emulation in the OPAL-RT environment is carried out to validate the effectiveness of the proposed method.
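
As a rough illustration of the forecasting stage, the sketch below trains a small LSTM to predict the next load value from a sliding window of past values. The synthetic data, 24-step window, network sizes, and training setup are illustrative assumptions for demonstration, not the paper's configuration.

```python
# A minimal LSTM load-forecaster sketch; data and hyperparameters are
# placeholder assumptions, not those used in the paper.
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """Predict the next load value from a window of past load values."""
    def __init__(self, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)         # out: (batch, window, hidden)
        return self.head(out[:, -1])  # forecast from the last time step

if __name__ == "__main__":
    torch.manual_seed(0)
    # Synthetic stand-in for the microgrid's historical load series.
    t = torch.linspace(0, 20 * torch.pi, 2000)
    load = torch.sin(t) + 0.1 * torch.randn_like(t)
    window = 24
    X = torch.stack([load[i:i + window]
                     for i in range(len(load) - window)]).unsqueeze(-1)
    y = load[window:].unsqueeze(-1)

    model = LoadForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(10):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    print(f"final training MSE: {loss.item():.4f}")
```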
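For the dispatch stage, the abstract names DDPG. Below is a compact sketch of one DDPG update step, with a hypothetical two-dimensional action (generator setpoint and BESS charge/discharge power) standing in for the paper's dispatch action; the state layout, dimensions, and hyperparameters are assumptions, not the paper's model.

```python
# A compact DDPG update sketch: the critic regresses to the Bellman target,
# the actor follows the deterministic policy gradient, and target networks
# are softly updated. All dimensions and rewards here are illustrative.
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 2  # e.g., [forecast load, price, SOC, hour]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())  # actions in [-1, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(batch, gamma=0.99, tau=0.005):
    s, a, r, s2 = batch
    with torch.no_grad():
        y = r + gamma * critic_t(s2, actor_t(s2))   # Bellman target
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()        # policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)  # soft target update

# Smoke test with a random transition batch (state, action, reward, next state).
batch = (torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM),
         torch.randn(32, 1), torch.randn(32, STATE_DIM))
update(batch)
```

In a setup like the one the abstract describes, the power constraints would presumably be handled by scaling the tanh output to the feasible generator and BESS limits and penalizing constraint violations (e.g., SOC bounds) in the reward; that mapping is omitted here.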