{"title":"A tri-generation system based Micro-Grid Energy management: A deep reinforcement learning Approach","authors":"Hasan Saeed Qazi, Nian Liu, Tong Wang, Arsalan Masood, Babar Sattar","doi":"10.1109/CEECT50755.2020.9298589","DOIUrl":null,"url":null,"abstract":"The Combined cooling, heating and power (CCHP) systems based Micro-Grid (MG) provide a substitute to coup the energy concern issue such as energy scarcity, secure energy transmission and distribution, flue gas outpouring control, and economic stabilization and efficiency of power system. The fluctuation of renewable energy sources (RS) and multiple load demands, i.e. Electrical, Thermal and cooling, challenges the CCHP based MG efficient economic operation. For diverse operating situations adaptability and to enhance the reliability and economic performance, the deep reinforcement learning (DRL) is proposed for CCHP based MG in this article. To reduce the operational cost (OC) and improve the energy utilization the MG model is presented on base of Markov Decision Process (MDP). For further enhancement and applicability to energy management concern of MG an improve DRL algorithm called Distributed proximal policy optimization (DPPO) is introduce. To find the optimal policies, the MG operator (agent) will be trained for diverse operating situation for the efficient response to emergency conditions. Simulations are carried out and the merits of proposed model is presented in results.","PeriodicalId":115174,"journal":{"name":"2020 International Conference on Electrical Engineering and Control Technologies (CEECT)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Electrical Engineering and Control Technologies (CEECT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEECT50755.2020.9298589","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Combined cooling, heating and power (CCHP) based micro-grids (MGs) offer an alternative for addressing energy concerns such as energy scarcity, secure energy transmission and distribution, flue-gas emission control, and the economic stability and efficiency of the power system. The fluctuation of renewable energy sources (RES) and of multiple load demands, i.e., electrical, thermal and cooling, challenges the efficient and economic operation of a CCHP-based MG. To adapt to diverse operating situations and to enhance reliability and economic performance, deep reinforcement learning (DRL) is proposed for the CCHP-based MG in this article. To reduce the operating cost (OC) and improve energy utilization, the MG model is formulated as a Markov decision process (MDP). To further improve applicability to the MG energy-management problem, an improved DRL algorithm called distributed proximal policy optimization (DPPO) is introduced. To find the optimal policies, the MG operator (agent) is trained over diverse operating situations so that it can respond efficiently to emergency conditions. Simulations are carried out, and the merits of the proposed model are demonstrated in the results.
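The abstract describes formulating the MG energy-management problem as an MDP whose reward reflects operating cost, which a DPPO agent then learns to maximize. The sketch below is a minimal, hypothetical illustration of such an MDP in Python; the class name `CCHPMicroGridEnv`, the state and action layout, and all cost coefficients are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

class CCHPMicroGridEnv:
    """Toy MDP sketch for CCHP-based micro-grid dispatch (illustrative only).

    State  : [PV output, electrical load, thermal load, cooling load, grid price]
    Action : [gas-turbine electrical setpoint (kW), grid import (+) / export (-) (kW)]
    Reward : negative operating cost, so maximizing reward minimizes cost.
    """

    def __init__(self, horizon=24, seed=0):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)
        self.t = 0
        self.state = None

    def _sample_state(self):
        # Assumed stochastic profiles for renewables, loads, and tariff.
        pv = max(0.0, 50 * np.sin(np.pi * self.t / 24) + self.rng.normal(0, 5))
        e_load = 80 + self.rng.normal(0, 10)
        h_load = 40 + self.rng.normal(0, 5)
        c_load = 30 + self.rng.normal(0, 5)
        price = 0.10 + 0.05 * (8 <= self.t <= 20)  # simple peak/off-peak tariff
        return np.array([pv, e_load, h_load, c_load, price])

    def reset(self):
        self.t = 0
        self.state = self._sample_state()
        return self.state

    def step(self, action):
        gt_power, grid_power = action
        pv, e_load, h_load, c_load, price = self.state

        # Electrical balance mismatch is penalized; thermal and cooling demands
        # are assumed to be covered by gas-turbine heat recovery (simplified).
        supply = pv + gt_power + grid_power
        imbalance = abs(supply - e_load)
        fuel_cost = 0.08 * gt_power              # assumed fuel price per kWh
        grid_cost = price * max(grid_power, 0)   # pay only for imports
        reward = -(fuel_cost + grid_cost + 1.0 * imbalance)

        self.t += 1
        done = self.t >= self.horizon
        self.state = self._sample_state()
        return self.state, reward, done


if __name__ == "__main__":
    env = CCHPMicroGridEnv()
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = np.array([40.0, 20.0])  # placeholder policy; DPPO would learn this
        obs, reward, done = env.step(action)
        total += reward
    print(f"episode return (negative operating cost): {total:.2f}")
```

In a DPPO setup, several worker copies of such an environment would be rolled out in parallel and their gradients aggregated into a shared PPO update; the placeholder constant action above only stands in for the learned policy.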