Multi-Agent Energy Management Strategy for Multi-Microgrids Using Reinforcement Learning
Mohammad Safayet Hossain, Chinwendu Enyioha
2023 IEEE Texas Power and Energy Conference (TPEC), published 2023-02-13
DOI: 10.1109/TPEC56611.2023.10078538
In this paper, an intelligent energy management framework is proposed to operate multiple grid-connected microgrids (MGs) using a cooperative control strategy. Each MG incorporates an intelligent agent, distributed energy resources (DERs), and residential loads. The agents cooperate to optimize the control inputs using state observations shared over a communication link. The MGs are connected to two utility grids and participate in the electricity market through power exchange driven by a real-time price signal. When the primary grid link goes offline, the secondary link supplies power to the MGs at a higher tariff. Reinforcement learning (RL) is explored to build an intelligent energy management system (EMS), with the proximal policy optimization (PPO) algorithm employed to train the agents. It is verified that the proposed energy management strategy significantly reduces the operational cost of the MGs by exploiting a near-optimal real-time scheduling policy. Moreover, when the primary grid link is offline, the trained agents optimize the operation of the DERs to compensate for the higher tariff.
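The paper's implementation is not reproduced in this record, but the clipped surrogate objective at the heart of the PPO algorithm it names can be sketched in a few lines. Below is a minimal, illustrative version: `logp_new` and `logp_old` are the log-probabilities of the taken actions under the current and behavior policies, and `advantages` could, in an EMS setting like this one, reflect operational-cost savings relative to a baseline schedule (all names here are hypothetical, not from the paper).

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Mean clipped surrogate objective of PPO (to be maximized).

    For each sample, the importance ratio pi_new/pi_old is clipped to
    [1 - eps, 1 + eps], and the pessimistic minimum of the clipped and
    unclipped terms bounds how far one update can move the policy.
    """
    total = 0.0
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)                        # pi_new(a|s) / pi_old(a|s)
        clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)  # clip to [1-eps, 1+eps]
        total += min(ratio * adv, clipped * adv)         # pessimistic bound
    return total / len(advantages)

# With identical policies the ratio is 1, so the objective is just the
# mean advantage; with a large ratio and positive advantage, the clipped
# term (1 + eps) * adv caps the incentive to move further.
```

In a multi-agent setup such as the one the abstract describes, each MG agent would maximize an objective of this form with its own advantage estimates; the cooperative aspect enters through the shared state observations, not through the objective itself.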