Continuous variable quantum reinforcement learning for HVAC control and power management in residential building

Sarvar Hussain Nengroo, Dongsoo Har, Hoon Jeong, Taewook Heo, Sangkeum Lee

Energy and AI, Volume 21, Article 100541. Published 2025-06-16. DOI: 10.1016/j.egyai.2025.100541
https://www.sciencedirect.com/science/article/pii/S2666546825000734

Citations: 0
Abstract
The use of occupancy information for heating, ventilation, and air conditioning (HVAC) control in smart buildings has become increasingly important for enhancing energy efficiency and occupant comfort. However, residential HVAC control presents significant challenges due to the complex dynamics of buildings and the uncertainties associated with heat loads and weather conditions. This study addresses the gap in adaptive, energy-efficient HVAC control by introducing a quantum reinforcement learning (QRL) based approach. Unlike conventional reinforcement learning techniques, QRL leverages quantum computing principles to efficiently handle high-dimensional state and action spaces, enabling more precise HVAC control in multi-zone residential buildings. The proposed framework integrates real-time occupancy detection based on deep learning with operational data, including power consumption patterns, air conditioner control data, and external temperature variations. To evaluate the effectiveness of the proposed approach, simulations were conducted using real-world data from 26 residential households over a three-month period. The results demonstrate that QRL-based HVAC control significantly reduces energy consumption and electricity costs while maintaining thermal comfort. Compared with the deep deterministic policy gradient method, the QRL approach achieved a 63% reduction in power consumption and a 64.4% decrease in electricity costs. Similarly, it outperformed the proximal policy optimization algorithm, with average reductions of 62.5% in electricity costs and 62.4% in power consumption.
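The abstract does not describe the circuit architecture, so the following is only a minimal illustrative sketch of the general idea behind a quantum reinforcement learning policy: an observation (here, room temperature) is angle-encoded into a parameterized quantum circuit, and the measurement expectation is mapped to a control action (an AC setpoint). It classically simulates a single-qubit variational circuit with NumPy; the function names, the one-qubit design, and the 18–30 °C comfort band are all hypothetical and far simpler than the continuous-variable QRL controller the paper proposes.

```python
import numpy as np

def ry(angle):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(angle / 2.0), np.sin(angle / 2.0)
    return np.array([[c, -s], [s, c]])

def qrl_policy(temp_c, theta, t_min=18.0, t_max=30.0):
    """Toy variational-circuit policy: encode a room temperature,
    apply a trainable rotation, and decode a setpoint from <Z>."""
    # Data-encoding layer: scale the temperature to [0, 1] and
    # angle-encode it into the qubit via an RY rotation.
    x = (temp_c - t_min) / (t_max - t_min)
    state = ry(np.pi * x) @ np.array([1.0, 0.0])
    # Trainable variational layer (theta would be tuned by the RL loop,
    # e.g. via the parameter-shift rule).
    state = ry(theta) @ state
    # Pauli-Z expectation: <Z> = |amp_0|^2 - |amp_1|^2, lies in [-1, 1].
    z = state[0] ** 2 - state[1] ** 2
    # Map the expectation back into the comfort band as the AC setpoint.
    return t_min + (z + 1.0) / 2.0 * (t_max - t_min)

# A warmer room drives the expectation down, lowering the setpoint.
print(qrl_policy(28.0, theta=0.2))
```

In a full RL loop, `theta` (in practice, many such parameters across multiple qubits or continuous-variable modes) would be updated from a reward balancing power consumption against thermal comfort; gradients of circuit expectations are commonly obtained with the parameter-shift rule rather than backpropagation.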