Incremental learning user profile and deep reinforcement learning for managing building energy in heating water

Authors: Linfei Yin, Yi Xiong
DOI: 10.1016/j.energy.2024.133705
Journal: Energy, Volume 313, Article 133705 (Q1, Energy & Fuels; Impact Factor 9.0)
Publication date: 2024-11-09
URL: https://www.sciencedirect.com/science/article/pii/S0360544224034832
Citations: 0
Abstract
Deep reinforcement learning (DRL) has garnered growing attention as a data-driven control technique for built environments. However, existing DRL approaches for managing hot water systems cannot exploit information from multiple time steps, are prone to value overestimation, can become trapped in locally optimal solutions, and fail to cope with time-varying environments; as a result, they cannot minimize energy consumption while maintaining the water comfort and hygiene of occupants. This study therefore proposes an incremental learning user profile and deep reinforcement learning (ILUPDRL) method for controlling hot water systems. Hot water user profiles are employed to capture hot water demand (HWD) habits, and the ILUPDRL addresses the challenges posed by evolving HWD through incremental learning of these profiles. Moreover, to enable the ILUPDRL to exploit information from multiple time steps, this study proposes the recurrent proximal policy optimization (RPPO) algorithm and integrates it into the ILUPDRL. Simulation results show that the ILUPDRL achieves up to 67.53% energy savings while maintaining the water comfort and water hygiene of occupants.
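The abstract names two mechanisms: incremental learning of hot water user profiles (to track evolving demand habits) and a recurrent variant of proximal policy optimization (to use information from multiple time steps). The paper itself is not reproduced here, so the following is only a minimal NumPy sketch of those two generic ideas: an exponential-moving-average profile update as a stand-in for incremental profile learning, a simple recurrent cell as a stand-in for the recurrent policy, and the standard PPO clipped surrogate objective. All function names, shapes, and the `alpha`/`eps` parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def update_profile(profile, hour, demand, alpha=0.1):
    """Incrementally update a per-hour HWD profile with an exponential
    moving average, so the profile tracks evolving demand habits without
    retraining from scratch (alpha is an assumed learning rate)."""
    profile = profile.copy()
    profile[hour] += alpha * (demand - profile[hour])
    return profile

def recurrent_logits(obs_seq, W_in, W_h, W_out):
    """Roll a simple tanh recurrent cell over a sequence of observations
    so the policy output at each step can depend on earlier time steps --
    the property the abstract attributes to RPPO."""
    h = np.zeros(W_h.shape[0])
    out = []
    for obs in obs_seq:
        h = np.tanh(W_in @ obs + W_h @ h)  # hidden state summarizes history
        out.append(W_out @ h)
    return np.array(out)

def ppo_clipped_objective(new_logp, old_logp, adv, eps=0.2):
    """Standard PPO clipped surrogate objective (maximized in training):
    the probability ratio is clipped to [1-eps, 1+eps] and the pessimistic
    minimum is taken, which discourages overly large policy updates."""
    ratio = np.exp(new_logp - old_logp)
    return np.mean(np.minimum(ratio * adv,
                              np.clip(ratio, 1 - eps, 1 + eps) * adv))
```

With `alpha=0.1`, an observed demand of 10.0 at an hour whose profile entry is 0.0 moves that entry to 1.0 while leaving other hours unchanged; and a probability ratio of 2 with a positive unit advantage yields a clipped objective of 1.2 rather than 2, illustrating the update limiting that clipping provides.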
Journal Introduction
Energy is a multidisciplinary, international journal that publishes research and analysis in the field of energy engineering. Our aim is to become a leading peer-reviewed platform and a trusted source of information for energy-related topics.
The journal covers a range of areas including mechanical engineering, thermal sciences, and energy analysis. We are particularly interested in research on energy modelling, prediction, integrated energy systems, planning, and management.
Additionally, we welcome papers on energy conservation, efficiency, biomass and bioenergy, renewable energy, electricity supply and demand, energy storage, buildings, and economic and policy issues. These topics should align with our broader multidisciplinary focus.