Jiakai Gong;Nuo Yu;Fen Han;Bin Tang;Haolong Wu;Yuan Ge
IEEE Transactions on Artificial Intelligence, vol. 5, no. 11, pp. 5371-5380. Published 2024-07-16. DOI: 10.1109/TAI.2024.3428510. Available: https://ieeexplore.ieee.org/document/10599942/
Energy Scheduling Optimization for Microgrids Based on Partially Observable Markov Game
Microgrids (MGs) are essential for enhancing energy efficiency and minimizing power usage through the regulation of energy storage systems. However, privacy-related concerns obstruct real-time, precise regulation of these systems because state-of-charge (SOC) data are unavailable. This article introduces a self-adaptive energy scheduling optimization framework for MGs that operates without SOC information, formulating the problem as a partially observable Markov game (POMG) to decrease energy usage. Furthermore, to derive an optimal energy scheduling strategy, an MG system optimization approach based on recurrent multiagent deep deterministic policy gradient (RMADDPG) is presented. Simulation results show that the proposed method reduces electrical energy consumption by 4.29%, 5.56%, and 12.95% compared with MADDPG, deterministic recurrent policy gradient (DRPG), and independent Q-learning (IQL), respectively.
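The key idea behind a recurrent actor in this setting is that, when the SOC is unobservable, the policy must condition on a hidden state summarizing past observations rather than on the true battery state. The sketch below is a minimal, illustrative GRU-based deterministic actor in numpy; the observation fields (price, load, PV output), dimensions, and class name are assumptions for illustration, not the paper's implementation, and the full RMADDPG method additionally trains centralized critics over all agents' observations and actions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentActor:
    """Illustrative GRU-based deterministic actor for a POMG agent:
    maps a partial observation (no SOC) plus a recurrent hidden state
    to a continuous charge/discharge action in [-1, 1]."""

    def __init__(self, obs_dim, hidden_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = obs_dim + hidden_dim
        # GRU weights: update gate z, reset gate r, candidate state
        self.Wz = rng.normal(0.0, 0.1, (hidden_dim, d))
        self.Wr = rng.normal(0.0, 0.1, (hidden_dim, d))
        self.Wh = rng.normal(0.0, 0.1, (hidden_dim, d))
        # Output head: hidden state -> bounded action
        self.Wa = rng.normal(0.0, 0.1, (action_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def step(self, obs, h):
        x = np.concatenate([obs, h])
        z = sigmoid(self.Wz @ x)                 # update gate
        r = sigmoid(self.Wr @ x)                 # reset gate
        x_r = np.concatenate([obs, r * h])
        h_tilde = np.tanh(self.Wh @ x_r)         # candidate hidden state
        h_new = (1.0 - z) * h + z * h_tilde      # GRU state update
        action = np.tanh(self.Wa @ h_new)        # action in [-1, 1]
        return action, h_new

# Roll the actor over a short observation sequence (hypothetical fields:
# electricity price, local load, PV generation), carrying hidden state.
actor = RecurrentActor(obs_dim=3, hidden_dim=8, action_dim=1)
h = np.zeros(8)
for obs in [np.array([0.2, 1.0, 0.5]), np.array([0.3, 0.9, 0.4])]:
    action, h = actor.step(obs, h)
print(action)  # single bounded charge/discharge command
```

Because the hidden state is threaded through `step`, the action at each timestep depends on the whole observation history, which is what lets the agent compensate for the missing SOC signal.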