{"title":"Cooperative Optimization Strategy for Distributed Energy Resource System using Multi-Agent Reinforcement Learning","authors":"Zhaoyang Liu, Tianchun Xiang, Tianhao Wang, C. Mu","doi":"10.1109/SSCI50451.2021.9659540","DOIUrl":null,"url":null,"abstract":"In this paper, a consensus multi-agent deep reinforcement learning algorithm is introduced for distributed cooperative secondary voltage control of microgrids. To reduce dependence on the system model and enhance communication efficiency, we propose a fully decentralized multi-agent advantage actor critic (A2C) algorithm with local communication networks, which considers each distributed energy resource (DER) as an agent. Both local state and the messages received from neighbors are employed by each agent to learn a control strategy. Moreover, the maximum entropy reinforcement learning framework is applied to improve exploration of agents. The proposed algorithm is verified in two different scale microgrid setups, which are microgrid-6 and microgrid-20. Experiment results show the effectiveness and superiority of our proposed algorithm.","PeriodicalId":255763,"journal":{"name":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI50451.2021.9659540","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In this paper, a consensus multi-agent deep reinforcement learning algorithm is introduced for distributed cooperative secondary voltage control of microgrids. To reduce dependence on the system model and enhance communication efficiency, we propose a fully decentralized multi-agent advantage actor-critic (A2C) algorithm with local communication networks, which treats each distributed energy resource (DER) as an agent. Each agent uses both its local state and the messages received from its neighbors to learn a control strategy. Moreover, the maximum entropy reinforcement learning framework is applied to improve agent exploration. The proposed algorithm is verified on two microgrid setups of different scales, microgrid-6 and microgrid-20. Experimental results show the effectiveness and superiority of the proposed algorithm.
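The abstract gives no implementation details, so the following is only a minimal sketch, assuming a discrete action space and a PyTorch implementation, of how an entropy-regularized advantage actor-critic update for a single DER agent might look when the policy is conditioned on the local state concatenated with neighbor messages. All names here (DERAgent, msg_dim, a2c_entropy_update) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class DERAgent(nn.Module):
    """Hypothetical single-DER agent: actor and critic over local state + neighbor messages."""
    def __init__(self, obs_dim, msg_dim, n_actions, hidden=64):
        super().__init__()
        in_dim = obs_dim + msg_dim  # local observation concatenated with received messages
        self.actor = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions))
        self.critic = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, obs, msgs):
        x = torch.cat([obs, msgs], dim=-1)
        return self.actor(x), self.critic(x)

def a2c_entropy_update(agent, optimizer, obs, msgs, actions, returns, entropy_coef=0.01):
    """One entropy-regularized A2C step (illustrative, not the paper's exact update rule)."""
    logits, values = agent(obs, msgs)
    dist = Categorical(logits=logits)
    advantages = returns - values.squeeze(-1).detach()        # advantage estimate
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = (returns - values.squeeze(-1)).pow(2).mean()
    entropy_bonus = dist.entropy().mean()                      # maximum-entropy exploration term
    loss = policy_loss + 0.5 * value_loss - entropy_coef * entropy_bonus
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a fully decentralized setting each DER would hold its own copy of such an agent and exchange messages only with its neighbors over the local communication network; the sketch above shows one agent's update in isolation.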