Mohammad Seyfi, Saku Levikari, Mikko Nykyri, Behnam Mohammadi-Ivatloo, Samuli Honkapuro
Title: Deep Reinforcement Learning-Based Real-Time Controller for Energy-Efficient Buildings
Journal: IET Generation, Transmission & Distribution, vol. 19, no. 1
DOI: 10.1049/gtd2.70103
Published: 2025-06-13
URL: https://onlinelibrary.wiley.com/doi/10.1049/gtd2.70103
Citations: 0
Abstract
Energy-efficient buildings play an important role in driving the transition toward more efficient and sustainable energy systems, especially in green local energy communities. However, incorporating such buildings into larger energy networks poses considerable difficulties, particularly in developing control systems that are both effective and efficient. In this paper, a deep reinforcement learning (DRL) approach is proposed to optimize the control of various decision variables within an integrated energy system that includes an energy-efficient building. A Markov decision process is formulated to maximize the building's profit from energy transactions with the electricity grid while simultaneously maintaining indoor temperatures at levels preferred by the occupants. To achieve this, the heating, ventilation, and air conditioning (HVAC) system is scheduled using the DRL method. Specifically, a soft actor-critic (SAC) agent is trained to manage an energy system comprising a real-case energy-efficient building in the city of Lahti, Finland, with an HVAC control system, an energy storage system, solar panels, and energy interactions with the grid. The results demonstrate the ability of the SAC agent to learn near-optimal decision-making strategies, increasing the building's economic performance while ensuring thermal comfort for residents. This approach highlights the potential of DRL in enhancing both economic and environmental outcomes in building energy management.
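The abstract describes an MDP whose reward balances profit from grid energy transactions against occupant thermal comfort. A minimal sketch of such a reward term is given below; the comfort band, weighting coefficient, and sign conventions are illustrative assumptions, not the paper's actual formulation.

```python
def step_reward(grid_power_kw: float, price_eur_per_kwh: float,
                indoor_temp_c: float,
                comfort_low: float = 20.0, comfort_high: float = 24.0,
                comfort_weight: float = 1.0, dt_h: float = 1.0) -> float:
    """Illustrative per-step reward for a building-energy MDP.

    grid_power_kw > 0 means selling to the grid (revenue);
    grid_power_kw < 0 means buying from the grid (cost).
    A linear penalty applies when the indoor temperature leaves
    the occupants' preferred band [comfort_low, comfort_high].
    """
    # Economic term: profit from the energy transaction over the step.
    profit = grid_power_kw * price_eur_per_kwh * dt_h

    # Comfort term: distance from the preferred temperature band.
    if indoor_temp_c < comfort_low:
        discomfort = comfort_low - indoor_temp_c
    elif indoor_temp_c > comfort_high:
        discomfort = indoor_temp_c - comfort_high
    else:
        discomfort = 0.0

    return profit - comfort_weight * discomfort
```

An SAC agent trained against a reward of this shape trades off revenue against comfort violations, which is the behavior the abstract attributes to the proposed controller.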
Journal Description:
IET Generation, Transmission & Distribution is intended as a forum for the publication and discussion of current practice and future developments in electric power generation, transmission and distribution. Practical papers in which examples of good present practice can be described and disseminated are particularly sought. Papers of high technical merit relying on mathematical arguments and computation will be considered, but authors are asked to relegate, as far as possible, the details of analysis to an appendix.
The scope of IET Generation, Transmission & Distribution includes the following:
Design of transmission and distribution systems
Operation and control of power generation
Power system management, planning and economics
Power system operation, protection and control
Power system measurement and modelling
Computer applications and computational intelligence in power engineering
Flexible AC or DC transmission systems
Special Issues. Current Call for papers:
Next Generation of Synchrophasor-based Power System Monitoring, Operation and Control - https://digital-library.theiet.org/files/IET_GTD_CFP_NGSPSMOC.pdf