Ahmed M. Taher, Shady H.E. Abdel Aleem, Saad F. Al-Gahtani, Ziad M. Ali, Hany M. Hasanien
Renewable Energy, Volume 256, Article 124537 · DOI: 10.1016/j.renene.2025.124537 · Published: 2025-09-30 (Journal Article)
Modified deep reinforcement learning for frequency regulation in active distribution systems with soft open points, storage units and electric vehicles
The integration of renewable energy sources and storage devices is increasing as an effective approach to realizing the smart grid concept and reducing carbon emissions. However, as electric vehicle (EV) adoption rises and demand for high-power charging grows, the electrical grid faces significant challenges, particularly in the presence of stochastic energy sources. Consequently, robust regulation strategies for managing distribution system uncertainties, especially in frequency regulation, are becoming more critical. Distribution systems interconnected through multi-terminal soft open points (SOPs) are evolving into highly controllable, integrated, and flexible architectures. Performance is further enhanced by incorporating a dedicated terminal for hybrid hydrogen energy storage. Additionally, the integration of vehicle-to-grid (V2G) and grid-to-vehicle (G2V) operations has been explored. To manage these operational frameworks effectively, a modified deep reinforcement learning (RL) strategy based on the deep deterministic policy gradient (DDPG) algorithm is proposed. Multi-agent deep RL is employed, with each agent generating multiple control signals based on reward functions derived from a quadratic optimization function within the model predictive control (MPC) framework. To ensure an optimal control-action waveform and enhance system performance, each DDPG agent's control action is scaled by its observation value together with the observation's integral and derivative, combined through a filter element. When the proposed modified deep RL strategy is applied alongside these components, the rate of frequency change and power-transfer fluctuations achieve steady-state errors on the order of 10⁻⁸, with significantly damped overshoot and undershoot. This approach effectively maintains system performance, outperforming the other simulated scenarios.
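The abstract describes two mechanisms: a reward built from an MPC-style quadratic cost, and a control action that is scaled by the observation's proportional, integral, and derivative terms before passing through a filter. The paper's exact weights, filter design, and agent interfaces are not given here, so the sketch below is purely illustrative; every function name, gain, and constant is an assumption, not the authors' implementation.

```python
import numpy as np


def quadratic_reward(x, u, Q, R):
    """MPC-style quadratic cost negated into a reward (higher is better).

    Illustrative assumption: reward = -(x'Qx + u'Ru), with x the state
    deviation (e.g. frequency error) and u the agent's control effort.
    """
    return -(x @ Q @ x + u @ R @ u)


class PIDScaledAction:
    """Hypothetical action-shaping stage: scale a DDPG agent's raw action
    by the observation's P, I, and D terms, then smooth the result with a
    first-order low-pass filter. Gains and time constants are placeholders.
    """

    def __init__(self, kp=1.0, ki=0.1, kd=0.01, tau=0.05, dt=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.tau, self.dt = tau, dt
        self.integral = 0.0     # running integral of the observation
        self.prev_obs = 0.0     # previous observation, for the derivative
        self.filtered = 0.0     # filter state (the shaped action)

    def step(self, raw_action, obs):
        self.integral += obs * self.dt
        derivative = (obs - self.prev_obs) / self.dt
        self.prev_obs = obs
        pid = self.kp * obs + self.ki * self.integral + self.kd * derivative
        scaled = raw_action * pid
        # First-order filter: y += (dt / tau) * (u - y)
        self.filtered += (self.dt / self.tau) * (scaled - self.filtered)
        return self.filtered
```

In a training loop, each agent's actor output would be passed through such a shaping stage before being applied to the plant, and `quadratic_reward` would be evaluated on the resulting state and control deviations at every step.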
About the journal:
Renewable Energy journal is dedicated to advancing knowledge and disseminating insights on various topics and technologies within renewable energy systems and components. Our mission is to support researchers, engineers, economists, manufacturers, NGOs, associations, and societies in staying updated on new developments in their respective fields and applying alternative energy solutions to current practices.
As an international, multidisciplinary journal in renewable energy engineering and research, we strive to be a premier peer-reviewed platform and a trusted source of original research and reviews in the field of renewable energy. Join us in our endeavor to drive innovation and progress in sustainable energy solutions.