{"title":"Data Driven Real-Time Dynamic Voltage Control Using Decentralized Execution Multi-Agent Deep Reinforcement Learning","authors":"Yuling Wang;Vijay Vittal","doi":"10.1109/OAJPE.2024.3459002","DOIUrl":null,"url":null,"abstract":"In recent years, there has been an increasing need for effective voltage control methods in power systems due to the growing complexity and dynamic nature of practical power grid operations. To enhance the controller’s resilience in addressing communication failures, a dynamic voltage control method employing distributed execution multi-agent deep reinforcement learning(DRL) is proposed. The proposed method follows a centralized training and decentralized execution based approach. Each agent has independent actor neural networks to output generator control commands and critic neural networks that evaluate command performance. Detailed dynamic models are integrated for agent training to effectively capture the system’s dynamic behavior following disturbances. Subsequent to training, each agent possesses the capability to autonomously generate control commands utilizing only local information. Simulation outcomes underscore the efficacy of the distributed execution multi-agent DRL controller, showcasing its capability in not only providing voltage support but also effectively handling communication failures among agents.","PeriodicalId":56187,"journal":{"name":"IEEE Open Access Journal of Power and Energy","volume":null,"pages":null},"PeriodicalIF":3.3000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10679222","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Access Journal of Power and Energy","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10679222/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Abstract
In recent years, there has been an increasing need for effective voltage control methods in power systems due to the growing complexity and dynamic nature of practical power grid operations. To enhance the controller's resilience to communication failures, a dynamic voltage control method employing decentralized execution multi-agent deep reinforcement learning (DRL) is proposed. The proposed method follows a centralized training and decentralized execution approach. Each agent has independent actor neural networks that output generator control commands and critic neural networks that evaluate command performance. Detailed dynamic models are integrated into agent training to effectively capture the system's dynamic behavior following disturbances. After training, each agent can autonomously generate control commands using only local information. Simulation results underscore the efficacy of the decentralized execution multi-agent DRL controller, showcasing its capability not only to provide voltage support but also to handle communication failures among agents effectively.
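The centralized-training, decentralized-execution structure described in the abstract (per-agent actors that act on local observations, with critics used only during training) can be illustrated with a minimal PyTorch sketch. This is a sketch under assumptions, not the paper's implementation: the MADDPG-style centralized critic, network sizes, and names such as `Actor`, `CentralizedCritic`, `obs_dim`, and `act_dim` are illustrative, since the abstract does not specify the architecture or hyperparameters.

```python
# Minimal sketch of a centralized-training / decentralized-execution
# multi-agent actor-critic setup (MADDPG-style). All dimensions, layer
# sizes, and names are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps an agent's local observation to a bounded generator control command."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # command in [-1, 1]
        )

    def forward(self, local_obs: torch.Tensor) -> torch.Tensor:
        return self.net(local_obs)


class CentralizedCritic(nn.Module):
    """Scores the joint state and joint action; used only during centralized training."""
    def __init__(self, global_obs_dim: int, joint_act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(global_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, global_obs: torch.Tensor, joint_actions: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([global_obs, joint_actions], dim=-1))


# Decentralized execution: after training, each agent issues its command
# from local measurements only, so no inter-agent communication is required.
n_agents, obs_dim, act_dim = 3, 20, 1  # illustrative sizes
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
local_obs = [torch.randn(1, obs_dim) for _ in range(n_agents)]
commands = [actor(o) for actor, o in zip(actors, local_obs)]
```

The key design point mirrored here is that the critic, which needs global information, exists only at training time; execution relies solely on the per-agent actors, which is what makes the controller tolerant of communication failures between agents.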