Reinforcement learning optimal control method for multi chiller HVAC system in an existing office building

H Y Wang, Q. Ge, C Ma, T. Cui
{"title":"现有办公楼多冷水机组暖通空调系统的强化学习优化控制方法","authors":"H Y Wang, Q. Ge, C Ma, T. Cui","doi":"10.1088/1755-1315/1372/1/012096","DOIUrl":null,"url":null,"abstract":"\n Given that buildings consume approximately 33% of global energy, and HVAC systems contribute nearly half of a building’s total energy demand, optimizing their efficiency is imperative for sustainable energy use. Many existing buildings operate HVAC systems inefficiently, displaying non-stationary behavior. Current reinforcement learning (RL) training methods rely on historical data, which is often obtained through costly modeling or trial-and-error methods in real buildings. This paper introduces a novel reinforcement learning construction framework designed to improve the robustness and learning speed of RL control while reducing learning costs. The framework is specifically tailored for existing office buildings. Applying this framework to control HVAC systems in real office buildings in Beijing, engineering practice results demonstrate: during the data collection phase, energy efficiency surpasses traditional rule-based control methods from the previous year, achieving significantly improved energy performance (a 17.27% reduction) with minimal comfort sacrifices. The system achieves acceptable robustness, learning speed, and control stability. Reduced ongoing manual supervision leads to savings in optimization labor. Systematic exploration of actions required for RL training lays the foundation for RL algorithm development. Furthermore, by leveraging collected data, a reinforcement learning control algorithm is established, validating the reliability of this approach. This construction framework reduces the prerequisites for historical data and models, providing an acceptable alternative for systems with insufficient data or equipment conditions.","PeriodicalId":506254,"journal":{"name":"IOP Conference Series: Earth and Environmental Science","volume":"11 11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning optimal control method for multi chiller HVAC system in an existing office building\",\"authors\":\"H Y Wang, Q. Ge, C Ma, T. Cui\",\"doi\":\"10.1088/1755-1315/1372/1/012096\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Given that buildings consume approximately 33% of global energy, and HVAC systems contribute nearly half of a building’s total energy demand, optimizing their efficiency is imperative for sustainable energy use. Many existing buildings operate HVAC systems inefficiently, displaying non-stationary behavior. Current reinforcement learning (RL) training methods rely on historical data, which is often obtained through costly modeling or trial-and-error methods in real buildings. This paper introduces a novel reinforcement learning construction framework designed to improve the robustness and learning speed of RL control while reducing learning costs. The framework is specifically tailored for existing office buildings. Applying this framework to control HVAC systems in real office buildings in Beijing, engineering practice results demonstrate: during the data collection phase, energy efficiency surpasses traditional rule-based control methods from the previous year, achieving significantly improved energy performance (a 17.27% reduction) with minimal comfort sacrifices. 
The system achieves acceptable robustness, learning speed, and control stability. Reduced ongoing manual supervision leads to savings in optimization labor. Systematic exploration of actions required for RL training lays the foundation for RL algorithm development. Furthermore, by leveraging collected data, a reinforcement learning control algorithm is established, validating the reliability of this approach. This construction framework reduces the prerequisites for historical data and models, providing an acceptable alternative for systems with insufficient data or equipment conditions.\",\"PeriodicalId\":506254,\"journal\":{\"name\":\"IOP Conference Series: Earth and Environmental Science\",\"volume\":\"11 11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IOP Conference Series: Earth and Environmental Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/1755-1315/1372/1/012096\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IOP Conference Series: Earth and Environmental Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1755-1315/1372/1/012096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Given that buildings consume approximately 33% of global energy and HVAC systems account for nearly half of a building's total energy demand, optimizing their efficiency is imperative for sustainable energy use. Many existing buildings operate HVAC systems inefficiently and exhibit non-stationary behavior. Current reinforcement learning (RL) training methods rely on historical data, which is often obtained through costly modeling or trial-and-error in real buildings. This paper introduces a novel RL construction framework, tailored to existing office buildings, that improves the robustness and learning speed of RL control while reducing learning costs. The framework was applied to control the HVAC system of a real office building in Beijing. Engineering practice shows that, during the data collection phase, energy efficiency already surpassed the previous year's rule-based control, achieving a 17.27% reduction in energy use with minimal sacrifice of comfort. The system attains acceptable robustness, learning speed, and control stability, and the reduced need for ongoing manual supervision saves optimization labor. Systematic exploration of the actions required for RL training lays the foundation for algorithm development, and an RL control algorithm built from the collected data validates the reliability of the approach. The framework lowers the prerequisites on historical data and models, providing an acceptable alternative for systems with insufficient data or equipment conditions.
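The abstract does not give implementation details, but a minimal sketch of the kind of RL control loop it describes could look like the tabular Q-learning controller below, which picks how many chillers to run from a coarsely discretized (outdoor temperature, cooling load) state and learns from logged transitions. The chiller count, bin edges, reward weights, and every name here (discretize, choose_action, reward, update) are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of a tabular Q-learning chiller-staging controller.
# All constants and the toy reward are assumptions for illustration only.
import random
from collections import defaultdict

N_CHILLERS = 3                              # assumed plant size
ACTIONS = list(range(1, N_CHILLERS + 1))    # action = number of chillers running
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2       # learning rate, discount, exploration

# Q-table: state -> {action: value}
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def discretize(outdoor_temp_c, load_kw):
    """Map continuous measurements to a coarse state (illustrative bins)."""
    temp_bin = min(int(outdoor_temp_c // 5), 8)   # 5 degC bins, capped
    load_bin = min(int(load_kw // 200), 9)        # 200 kW bins, capped
    return (temp_bin, load_bin)

def choose_action(state):
    """Epsilon-greedy selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def reward(energy_kwh, comfort_violation_degc):
    """Toy reward: penalize energy use and comfort violations (assumed weights)."""
    return -energy_kwh - 50.0 * comfort_violation_degc

def update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (r + GAMMA * best_next - q_table[state][action])

if __name__ == "__main__":
    # One illustrative transition with made-up measurements.
    s = discretize(outdoor_temp_c=30.0, load_kw=650.0)
    a = choose_action(s)
    r = reward(energy_kwh=420.0, comfort_violation_degc=0.3)
    s_next = discretize(outdoor_temp_c=31.0, load_kw=700.0)
    update(s, a, r, s_next)
    print("Q-values for state", s, "->", q_table[s])
```

In a workflow like the one the paper describes, transitions logged during the supervised data-collection phase could be replayed offline through update() to bootstrap the table before the agent takes over live control; how the authors actually structure states, actions, and rewards is not specified in the abstract.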