Chi Zhang, S. Kuppannagari, R. Kannan, V. Prasanna
{"title":"基于神经网络的模型逼近强化学习的建筑暖通空调调度","authors":"Chi Zhang, S. Kuppannagari, R. Kannan, V. Prasanna","doi":"10.1145/3360322.3360861","DOIUrl":null,"url":null,"abstract":"Buildings sector is one of the major consumers of energy in the United States. The buildings HVAC (Heating, Ventilation, and Air Conditioning) systems, whose functionality is to maintain thermal comfort and indoor air quality (IAQ), account for almost half of the energy consumed by the buildings. Thus, intelligent scheduling of the building HVAC system has the potential for tremendous energy and cost savings while ensuring that the control objectives (thermal comfort, air quality) are satisfied. Traditionally, rule-based and model-based approaches such as linear-quadratic regulator (LQR) have been used for scheduling HVAC. However, the system complexity of HVAC and the dynamism in the building environment limit the accuracy, efficiency and robustness of such methods. Recently, several works have focused on model-free deep reinforcement learning based techniques such as Deep Q-Network (DQN). Such methods require extensive interactions with the environment. Thus, they are impractical to implement in real systems due to low sample efficiency. Safety-aware exploration is another challenge in real systems since certain actions at particular states may result in catastrophic outcomes. To address these issues and challenges, we propose a modelbased reinforcement learning approach that learns the system dynamics using a neural network. Then, we adopt Model Predictive Control (MPC) using the learned system dynamics to perform control with random-sampling shooting method. To ensure safe exploration, we limit the actions within safe range and the maximum absolute change of actions according to prior knowledge. We evaluate our ideas through simulation using widely adopted EnergyPlus tool on a case study consisting of a two zone data-center. 
Experiments show that the average deviation of the trajectories sampled from the learned dynamics and the ground truth is below 20%. Compared with baseline approaches, we reduce the total energy consumption by 17.1% ~ 21.8%. Compared with model-free reinforcement learning approach, we reduce the required number of training steps to converge by 10x.","PeriodicalId":128826,"journal":{"name":"Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation","volume":"1996 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"75","resultStr":"{\"title\":\"Building HVAC Scheduling Using Reinforcement Learning via Neural Network Based Model Approximation\",\"authors\":\"Chi Zhang, S. Kuppannagari, R. Kannan, V. Prasanna\",\"doi\":\"10.1145/3360322.3360861\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Buildings sector is one of the major consumers of energy in the United States. The buildings HVAC (Heating, Ventilation, and Air Conditioning) systems, whose functionality is to maintain thermal comfort and indoor air quality (IAQ), account for almost half of the energy consumed by the buildings. Thus, intelligent scheduling of the building HVAC system has the potential for tremendous energy and cost savings while ensuring that the control objectives (thermal comfort, air quality) are satisfied. Traditionally, rule-based and model-based approaches such as linear-quadratic regulator (LQR) have been used for scheduling HVAC. However, the system complexity of HVAC and the dynamism in the building environment limit the accuracy, efficiency and robustness of such methods. Recently, several works have focused on model-free deep reinforcement learning based techniques such as Deep Q-Network (DQN). Such methods require extensive interactions with the environment. 
Thus, they are impractical to implement in real systems due to low sample efficiency. Safety-aware exploration is another challenge in real systems since certain actions at particular states may result in catastrophic outcomes. To address these issues and challenges, we propose a modelbased reinforcement learning approach that learns the system dynamics using a neural network. Then, we adopt Model Predictive Control (MPC) using the learned system dynamics to perform control with random-sampling shooting method. To ensure safe exploration, we limit the actions within safe range and the maximum absolute change of actions according to prior knowledge. We evaluate our ideas through simulation using widely adopted EnergyPlus tool on a case study consisting of a two zone data-center. Experiments show that the average deviation of the trajectories sampled from the learned dynamics and the ground truth is below 20%. Compared with baseline approaches, we reduce the total energy consumption by 17.1% ~ 21.8%. 
Compared with model-free reinforcement learning approach, we reduce the required number of training steps to converge by 10x.\",\"PeriodicalId\":128826,\"journal\":{\"name\":\"Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation\",\"volume\":\"1996 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"75\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3360322.3360861\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3360322.3360861","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 75
Abstract
Building HVAC Scheduling Using Reinforcement Learning via Neural Network Based Model Approximation
The buildings sector is one of the major consumers of energy in the United States. Building HVAC (Heating, Ventilation, and Air Conditioning) systems, whose function is to maintain thermal comfort and indoor air quality (IAQ), account for almost half of the energy consumed by buildings. Thus, intelligent scheduling of building HVAC systems has the potential for tremendous energy and cost savings while ensuring that the control objectives (thermal comfort, air quality) are satisfied. Traditionally, rule-based and model-based approaches such as the linear-quadratic regulator (LQR) have been used for HVAC scheduling. However, the system complexity of HVAC and the dynamism of the building environment limit the accuracy, efficiency, and robustness of such methods. Recently, several works have focused on model-free deep reinforcement learning techniques such as the Deep Q-Network (DQN). Such methods require extensive interaction with the environment and are therefore impractical to implement in real systems due to low sample efficiency. Safety-aware exploration is another challenge in real systems, since certain actions in particular states may result in catastrophic outcomes. To address these issues and challenges, we propose a model-based reinforcement learning approach that learns the system dynamics using a neural network. We then apply Model Predictive Control (MPC) over the learned dynamics, performing control with the random-sampling shooting method. To ensure safe exploration, we limit actions to a safe range and bound the maximum absolute change between consecutive actions according to prior knowledge. We evaluate our approach through simulation with the widely adopted EnergyPlus tool on a case study consisting of a two-zone data center. Experiments show that the average deviation between trajectories sampled from the learned dynamics and the ground truth is below 20%. Compared with baseline approaches, we reduce total energy consumption by 17.1% to 21.8%. Compared with a model-free reinforcement learning approach, we reduce the number of training steps required to converge by 10x.
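The control loop the abstract describes — sampling random action sequences, rolling them out through a learned dynamics model, and picking the first action of the lowest-cost sequence, with actions clamped for safe exploration — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `learned_dynamics` and `cost` functions below are hypothetical stand-ins (the paper uses a trained neural network and an energy/comfort objective), and all parameter names and bounds are assumptions.

```python
import numpy as np

def learned_dynamics(state, action):
    # Hypothetical stand-in for the paper's neural-network dynamics model,
    # which predicts the next state from the current (state, action) pair.
    return 0.9 * state + 0.1 * action

def cost(state, action):
    # Hypothetical cost: control effort plus deviation from a 22 °C setpoint.
    return abs(action) + abs(state - 22.0)

def random_shooting_mpc(state, horizon=5, n_samples=1000,
                        a_min=-1.0, a_max=1.0, max_delta=0.2,
                        prev_action=0.0, seed=None):
    """Return the first action of the lowest-cost sampled action sequence.

    Safe exploration is enforced as in the abstract: each action is
    clamped to [a_min, a_max], and the absolute change from the previous
    action is limited to max_delta.
    """
    rng = np.random.default_rng(seed)
    best_cost, best_first = np.inf, None
    for _ in range(n_samples):
        s, a_prev, total, first = state, prev_action, 0.0, None
        for t in range(horizon):
            a = rng.uniform(a_min, a_max)
            # Limit the maximum absolute change between consecutive actions,
            # then clamp back into the safe range.
            a = float(np.clip(a, a_prev - max_delta, a_prev + max_delta))
            a = float(np.clip(a, a_min, a_max))
            if t == 0:
                first = a
            total += cost(s, a)            # accumulate trajectory cost
            s = learned_dynamics(s, a)     # roll out the learned model
            a_prev = a
        if total < best_cost:
            best_cost, best_first = total, first
    return best_first
```

In a full MPC loop, only this first action would be applied to the real (or simulated) building, the new state observed, and the optimization repeated at the next step.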