{"title":"多能量社区的能量管理:层次miqp约束的深度强化学习方法","authors":"Ahmed Shaban Omar;Ramadan El-Shatshat","doi":"10.1109/TSTE.2025.3550563","DOIUrl":null,"url":null,"abstract":"This paper proposes a hybrid mixed-integer quadratic programming-constrained deep reinforcement learning (MIQP-CDRL) framework for energy management of multi-energy communities. The framework employs a hierarchical two-layer structure: the MIQP layer handles day-ahead scheduling, minimizing operational costs while ensuring system constraint satisfaction, while the CDRL agent makes real-time adjustments. The goal of this framework is to combine the strengths of CDRL in addressing sequential decision-making problems in stochastic systems with the advantages of a mathematical programming model to guide the agent's exploration during the training and reduce the dependency on opaque policies during real-time operation. The system dynamics are modeled as a constrained Markov decision process (CMDP), which is solved by a model-free CDRL agent built upon the constrained policy optimization (CPO) algorithm. 
Practical test results demonstrate the effectiveness of this framework in improving the optimality and feasibility of the real-time solutions compared to existing stand-alone DRL approaches.","PeriodicalId":452,"journal":{"name":"IEEE Transactions on Sustainable Energy","volume":"16 3","pages":"2236-2250"},"PeriodicalIF":10.0000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Energy Management of Multi-Energy Communities: A Hierarchical MIQP-Constrained Deep Reinforcement Learning Approach\",\"authors\":\"Ahmed Shaban Omar;Ramadan El-Shatshat\",\"doi\":\"10.1109/TSTE.2025.3550563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a hybrid mixed-integer quadratic programming-constrained deep reinforcement learning (MIQP-CDRL) framework for energy management of multi-energy communities. The framework employs a hierarchical two-layer structure: the MIQP layer handles day-ahead scheduling, minimizing operational costs while ensuring system constraint satisfaction, while the CDRL agent makes real-time adjustments. The goal of this framework is to combine the strengths of CDRL in addressing sequential decision-making problems in stochastic systems with the advantages of a mathematical programming model to guide the agent's exploration during the training and reduce the dependency on opaque policies during real-time operation. The system dynamics are modeled as a constrained Markov decision process (CMDP), which is solved by a model-free CDRL agent built upon the constrained policy optimization (CPO) algorithm. 
Practical test results demonstrate the effectiveness of this framework in improving the optimality and feasibility of the real-time solutions compared to existing stand-alone DRL approaches.\",\"PeriodicalId\":452,\"journal\":{\"name\":\"IEEE Transactions on Sustainable Energy\",\"volume\":\"16 3\",\"pages\":\"2236-2250\"},\"PeriodicalIF\":10.0000,\"publicationDate\":\"2025-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Sustainable Energy\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10923740/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENERGY & FUELS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Sustainable Energy","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10923740/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Energy Management of Multi-Energy Communities: A Hierarchical MIQP-Constrained Deep Reinforcement Learning Approach
This paper proposes a hybrid mixed-integer quadratic programming-constrained deep reinforcement learning (MIQP-CDRL) framework for energy management of multi-energy communities. The framework employs a hierarchical two-layer structure: the MIQP layer handles day-ahead scheduling, minimizing operational costs while ensuring system constraint satisfaction, and the CDRL agent makes real-time adjustments. The goal of this framework is to combine the strengths of CDRL in addressing sequential decision-making problems in stochastic systems with the advantages of a mathematical programming model, which guides the agent's exploration during training and reduces dependency on opaque policies during real-time operation. The system dynamics are modeled as a constrained Markov decision process (CMDP), which is solved by a model-free CDRL agent built upon the constrained policy optimization (CPO) algorithm. Practical test results demonstrate the effectiveness of this framework in improving the optimality and feasibility of the real-time solutions compared to existing stand-alone DRL approaches.
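The hierarchical two-layer idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the MIQP day-ahead layer is stood in for by a precomputed hourly schedule, and the constrained real-time agent is stood in for by a correction that is clipped to a band around the day-ahead setpoint, mimicking how the mathematical-programming layer bounds the agent's actions. All names (`day_ahead_schedule`, `real_time_adjust`, the 10 kW band) are hypothetical.

```python
# Toy sketch of a hierarchical two-layer dispatch: a day-ahead schedule
# (standing in for the MIQP layer) plus bounded real-time corrections
# (standing in for the constrained DRL agent). Purely illustrative.

def day_ahead_schedule(forecast_load):
    """MIQP-layer stand-in: schedule generation to match the load forecast."""
    return list(forecast_load)  # one setpoint (kW) per hour

def real_time_adjust(setpoint, actual_load, max_deviation):
    """CDRL-agent stand-in: correct toward the realized load, but keep the
    correction inside a constraint band around the day-ahead setpoint."""
    correction = actual_load - setpoint
    # Clip the correction, as the constrained layer bounds the agent's actions.
    correction = max(-max_deviation, min(max_deviation, correction))
    return setpoint + correction

forecast = [100.0, 120.0, 90.0]   # hypothetical hourly load forecast (kW)
actual   = [105.0, 150.0, 88.0]   # realized load with forecast error (kW)

schedule = day_ahead_schedule(forecast)
dispatch = [real_time_adjust(s, a, max_deviation=10.0)
            for s, a in zip(schedule, actual)]
print(dispatch)  # hour 2's 30 kW deviation is clipped to the 10 kW band
```

In the paper's framework the band is not a fixed clip but the feasible region enforced by the MIQP schedule and the CPO constraint costs; the sketch only conveys the division of labor between the two layers.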
Journal Introduction:
The IEEE Transactions on Sustainable Energy serves as a pivotal platform for sharing groundbreaking research findings on sustainable energy systems, with a focus on their seamless integration into power transmission and/or distribution grids. The journal showcases original research spanning the design, implementation, grid-integration, and control of sustainable energy technologies and systems. Additionally, the Transactions warmly welcomes manuscripts addressing the design, implementation, and evaluation of power systems influenced by sustainable energy systems and devices.