Title: Automatic generation control of islanded microgrid using integral reinforcement learning-based adaptive optimal control strategy
Authors: Rasananda Muduli, Debashisha Jena, Tukaram Moger
DOI: 10.1007/s00202-024-02648-6
Journal: Electrical Engineering (JCR Q3, Engineering, Electrical & Electronic)
Published: 2024-09-02 (Journal Article)
Citations: 0
Abstract
Microgrids play an essential role in smart grid infrastructure, facilitating the seamless integration of distributed energy resources and supporting the growing adoption of renewable energy sources to meet the demand for sustainable energy. This paper presents an integral reinforcement learning (IRL)-based adaptive optimal control strategy for automatic generation control of an islanded microgrid. The algorithm is a model-free actor-critic method in which the critic parameters are learned by recursive least squares; the actor is straightforward, deriving its action directly from the critic. The robustness of the proposed control technique is investigated under uncertainties arising from parameter variation, an electric vehicle (EV) aggregator, and renewable energy sources. Case studies and comparative analyses demonstrate the control performance of the proposed strategy, whose effectiveness is evaluated against a deep Q-network (DQN) controller and a PI controller. The proposed controller significantly improves performance metrics: it reduces the peak frequency deviation by 6% and 14% compared to the DQN and PI controllers, respectively. Under multiple-step load disturbances, it reduces the mean square error by 28% and 42%, respectively, and lowers both the integral absolute error and the integral time absolute error by 21% and 35% compared to the DQN and PI controllers. Additionally, when operating with renewable energy sources, it decreases the standard deviation of the frequency deviation by 17% compared to the DQN controller and 23% compared to the PI controller.
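The abstract does not give the algorithm's implementation details, but the critic-side learning it describes — fitting value-function weights by recursive least squares over an IRL Bellman equation — can be sketched roughly as follows. This is a minimal illustration under standard IRL assumptions (a value function linear in a quadratic basis, with the measured integral cost over each interval as the regression target); the names `RLSCritic` and `quad_basis` are hypothetical and not from the paper.

```python
import numpy as np

def quad_basis(x):
    """Quadratic basis phi(x): upper-triangular entries of x x^T,
    so that V(x) = w^T phi(x) can represent a quadratic x^T P x."""
    n = len(x)
    return np.array([x[i] * x[j] for i in range(n) for j in range(i, n)])

class RLSCritic:
    """Recursive least-squares estimator for critic weights w in an
    IRL Bellman equation of the form
        w^T (phi(x_t) - phi(x_{t+T})) = c_t,
    where c_t is the measured integral cost over [t, t+T].
    Each call to update() feeds one (regressor, target) sample."""

    def __init__(self, dim, forgetting=1.0):
        self.w = np.zeros(dim)            # critic weight estimate
        self.P = 1e3 * np.eye(dim)        # inverse-correlation matrix
        self.lam = forgetting             # forgetting factor (1.0 = none)

    def update(self, regressor, target):
        r = np.asarray(regressor, dtype=float)
        k = self.P @ r / (self.lam + r @ self.P @ r)   # RLS gain vector
        err = target - self.w @ r                       # a-priori error
        self.w = self.w + k * err                       # weight correction
        self.P = (self.P - np.outer(k, r @ self.P)) / self.lam
        return err
```

In an actor-critic loop of this kind, the actor step would then read the improved policy directly off the fitted critic (e.g., a gain computed from the estimated value matrix), which matches the abstract's remark that the actor evaluates its action from the critic without a separate learned model.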
Journal description:
The journal “Electrical Engineering”, following the long tradition of Archiv für Elektrotechnik, publishes original papers of archival value in electrical engineering, with a strong focus on electric power systems, smart grid approaches to power transmission and distribution, power system planning, operation and control, electricity markets, renewable power generation, microgrids, power electronics, electrical machines and drives, electric vehicles, railway electrification systems and electric transportation infrastructures, energy storage in electric power systems and vehicles, high voltage engineering, electromagnetic transients in power networks, lightning protection, electrical safety, electrical insulation systems, apparatus, devices, and components. Manuscripts describing theoretical, computer application, and experimental research results are welcome.
Electrical Engineering - Archiv für Elektrotechnik is published in agreement with the Verband der Elektrotechnik Elektronik Informationstechnik e.V. (VDE).