Reinforcement Learning-Based Adaptation of Grid Following Inverter's Internal Controller to Networked Microgrids' Strengths

IET Smart Grid · IF 2.7, Q2 (Engineering, Electrical & Electronic) · Published 2025-10-17 · DOI: 10.1049/stg2.70039
Thanh Long Vu, Monish Mukherjee, Ankit Singhal, Kevin Schneider, Wei Du, Nikolai Drigal, Francis Tuffner, Jing Xie
Full text: https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/stg2.70039
Cited by: 0

Abstract

The varying topological configurations, generator commitments and dispatches, and dynamic load demand lead to changing system strength during the operation of networked microgrids. When the system strength changes significantly, fixed control gains at large devices may result in unsatisfactory system performance; this necessitates tuning the control gains at these devices to adapt to the changing system strength. In this paper, observer-based reinforcement learning (RL) is utilised to automatically tune the proportional-integral (PI) gains of the phase-locked loop (PLL) controller of grid-following (GFL) inverters to adapt to the changing strengths of microgrids and networked microgrids. The RL agent in this framework is augmented with an observer that predicts the system strength, from which the RL control policy adjusts the PLL controller's gains accordingly. In addition, to enhance control performance, the recently introduced barrier function-based RL framework is leveraged in the design of the reward function to prevent a deep frequency nadir. An operational 26 kV electric distribution system, modelled as networked microgrids, is used to illustrate the need for and effectiveness of the proposed RL-tuned control.
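The abstract's pipeline can be summarised in a minimal sketch: an observer estimates system strength, a policy maps that estimate to PLL PI gains, and a barrier-shaped reward penalises deep frequency dips. This is not the authors' implementation; every name, threshold, and the gain schedule below is an illustrative assumption.

```python
import math

# Hedged sketch (not the paper's actual design): couple an observer's
# estimate of system strength with a policy that sets the PLL's PI gains,
# and shape the RL reward with a log-barrier so deviations approaching a
# frequency-nadir limit are penalised sharply.

F_NOM = 60.0       # nominal frequency (Hz)
NADIR_LIMIT = 0.5  # allowed deviation before the barrier diverges (Hz)

def pll_gains_for_strength(scr_est: float) -> tuple[float, float]:
    """Toy stand-in for the learned policy: map an estimated short-circuit
    ratio (a proxy for system strength) to PLL PI gains. Weaker grids
    (low SCR) get softer gains to keep the PLL stable."""
    kp = min(1.0, 0.2 * scr_est)
    ki = min(50.0, 10.0 * scr_est)
    return kp, ki

def barrier_reward(freq_hz: float, eps: float = 1e-6) -> float:
    """Log-barrier-shaped reward: small deviations incur a mild quadratic
    penalty; deviations near NADIR_LIMIT are penalised heavily."""
    dev = abs(freq_hz - F_NOM)
    margin = max(NADIR_LIMIT - dev, eps)  # distance to the barrier
    return -dev ** 2 + math.log(margin / NADIR_LIMIT)

# A shallow dip is mildly penalised; a dip near the nadir limit is not.
r_small = barrier_reward(59.95)
r_deep = barrier_reward(59.52)
```

The barrier term goes to negative infinity as the deviation approaches the limit, which is what steers the learned policy away from gain settings that produce deep nadirs.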


Source journal: IET Smart Grid (Computer Science: Computer Networks and Communications)
CiteScore: 6.70
Self-citation rate: 4.30%
Articles per year: 41
Review time: 29 weeks