Adaptive transfer reinforcement learning (TRL) for cooling water systems with uniform agent design and multi-agent coordination

IF 7.1 · CAS Zone 2 (Engineering & Technology) · JCR Q1 (Construction & Building Technology)
Zhechao Wang, Zhihong Pang
{"title":"具有均匀智能体设计和多智能体协调的冷却水系统自适应传递强化学习","authors":"Zhechao Wang,&nbsp;Zhihong Pang","doi":"10.1016/j.enbuild.2025.116071","DOIUrl":null,"url":null,"abstract":"<div><div>Transfer Reinforcement Learning (TRL) offers a promising approach to optimizing building cooling water systems by improving both energy efficiency and operational effectiveness. This study introduces a novel TRL framework designed to accelerate the learning process of Reinforcement Learning (RL) agents by systematically leveraging prior experience from analogous systems. Unlike conventional RL approaches that start from scratch, our framework enhances initial performance while mitigating negative transfer through an adaptive multi-agent supervision mechanism. Our methodology involves three key innovations. First, we collect field data from three cooling water systems and train individual data-driven models to enable realistic energy and control simulations. Next, we design a uniform RL model with ratio-based inputs and outputs, ensuring transferability across systems with varying characteristics. This model includes two controllers—one for cooling towers and the other for cooling water pumps. Third, we introduce a structured TRL process in which a pre-trained RL model from a source system is transferred to two target systems. Within this transfer framework, we integrate three types of agents: one completely new agent and two variants of the trained RL model. A key feature of the framework is a supervision mechanism that coordinates these agents for positive transfer. It dynamically adjusts the selection probability of each agent through a constantly updated preference function and modifies learning objectives at different training stages. Various scenarios are tested to evaluate the framework’s performance with different transferred agents and learning stages. Simulation results demonstrate a 10 % improvement in energy savings—both initially and at convergence—compared to learning from scratch. Moreover, the proposed TRL framework effectively mitigates negative transfer and avoids converging to the suboptimal performance of a transferred agent. More importantly, it significantly reduces the effort required to select appropriate source systems, highlighting its practical applicability and potential for widespread adoption in building cooling water system optimization.</div></div>","PeriodicalId":11641,"journal":{"name":"Energy and Buildings","volume":"345 ","pages":"Article 116071"},"PeriodicalIF":7.1000,"publicationDate":"2025-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive transfer reinforcement learning (TRL) for cooling water systems with uniform agent design and multi-agent coordination\",\"authors\":\"Zhechao Wang,&nbsp;Zhihong Pang\",\"doi\":\"10.1016/j.enbuild.2025.116071\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Transfer Reinforcement Learning (TRL) offers a promising approach to optimizing building cooling water systems by improving both energy efficiency and operational effectiveness. This study introduces a novel TRL framework designed to accelerate the learning process of Reinforcement Learning (RL) agents by systematically leveraging prior experience from analogous systems. Unlike conventional RL approaches that start from scratch, our framework enhances initial performance while mitigating negative transfer through an adaptive multi-agent supervision mechanism. 
Our methodology involves three key innovations. First, we collect field data from three cooling water systems and train individual data-driven models to enable realistic energy and control simulations. Next, we design a uniform RL model with ratio-based inputs and outputs, ensuring transferability across systems with varying characteristics. This model includes two controllers—one for cooling towers and the other for cooling water pumps. Third, we introduce a structured TRL process in which a pre-trained RL model from a source system is transferred to two target systems. Within this transfer framework, we integrate three types of agents: one completely new agent and two variants of the trained RL model. A key feature of the framework is a supervision mechanism that coordinates these agents for positive transfer. It dynamically adjusts the selection probability of each agent through a constantly updated preference function and modifies learning objectives at different training stages. Various scenarios are tested to evaluate the framework’s performance with different transferred agents and learning stages. Simulation results demonstrate a 10 % improvement in energy savings—both initially and at convergence—compared to learning from scratch. Moreover, the proposed TRL framework effectively mitigates negative transfer and avoids converging to the suboptimal performance of a transferred agent. More importantly, it significantly reduces the effort required to select appropriate source systems, highlighting its practical applicability and potential for widespread adoption in building cooling water system optimization.</div></div>\",\"PeriodicalId\":11641,\"journal\":{\"name\":\"Energy and Buildings\",\"volume\":\"345 \",\"pages\":\"Article 116071\"},\"PeriodicalIF\":7.1000,\"publicationDate\":\"2025-07-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Energy and Buildings\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0378778825008011\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CONSTRUCTION & BUILDING TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energy and Buildings","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0378778825008011","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CONSTRUCTION & BUILDING TECHNOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Transfer Reinforcement Learning (TRL) offers a promising approach to optimizing building cooling water systems by improving both energy efficiency and operational effectiveness. This study introduces a novel TRL framework designed to accelerate the learning process of Reinforcement Learning (RL) agents by systematically leveraging prior experience from analogous systems. Unlike conventional RL approaches that start from scratch, our framework enhances initial performance while mitigating negative transfer through an adaptive multi-agent supervision mechanism. Our methodology involves three key innovations. First, we collect field data from three cooling water systems and train individual data-driven models to enable realistic energy and control simulations. Next, we design a uniform RL model with ratio-based inputs and outputs, ensuring transferability across systems with varying characteristics. This model includes two controllers—one for cooling towers and the other for cooling water pumps. Third, we introduce a structured TRL process in which a pre-trained RL model from a source system is transferred to two target systems. Within this transfer framework, we integrate three types of agents: one completely new agent and two variants of the trained RL model. A key feature of the framework is a supervision mechanism that coordinates these agents for positive transfer. It dynamically adjusts the selection probability of each agent through a constantly updated preference function and modifies learning objectives at different training stages. Various scenarios are tested to evaluate the framework’s performance with different transferred agents and learning stages. Simulation results demonstrate a 10 % improvement in energy savings—both initially and at convergence—compared to learning from scratch. Moreover, the proposed TRL framework effectively mitigates negative transfer and avoids converging to the suboptimal performance of a transferred agent. More importantly, it significantly reduces the effort required to select appropriate source systems, highlighting its practical applicability and potential for widespread adoption in building cooling water system optimization.
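The supervision mechanism described in the abstract, which coordinates one new agent and two transferred agents by adjusting each agent's selection probability through a continually updated preference function, can be pictured as a preference-based selector over the candidate agents. The Python sketch below is an illustrative assumption, not the paper's exact formulation: the agent names, the softmax mapping from preferences to probabilities, and the baseline-relative preference update are all hypothetical choices made for clarity.

```python
import numpy as np

# Hypothetical sketch of a multi-agent supervision mechanism: keep one
# preference score per candidate agent, map preferences to selection
# probabilities, and update the chosen agent's preference from reward feedback.
AGENTS = ["new_agent", "transferred_variant_a", "transferred_variant_b"]

class AgentSupervisor:
    def __init__(self, n_agents, lr=0.1, temperature=1.0):
        self.preferences = np.zeros(n_agents)  # one preference score per agent
        self.lr = lr
        self.temperature = temperature

    def selection_probabilities(self):
        # Softmax over preferences gives the probability of selecting each agent.
        z = self.preferences / self.temperature
        z -= z.max()  # numerical stability
        p = np.exp(z)
        return p / p.sum()

    def select(self, rng):
        # Sample which agent controls the system for the next training episode.
        return rng.choice(len(self.preferences), p=self.selection_probabilities())

    def update(self, chosen, reward, baseline):
        # Raise the chosen agent's preference when it beats the running
        # baseline reward, lower it otherwise.
        self.preferences[chosen] += self.lr * (reward - baseline)

# Usage: each episode the supervisor picks an agent, observes a reward
# (e.g., simulated energy savings), and updates its preference.
rng = np.random.default_rng(0)
supervisor = AgentSupervisor(len(AGENTS))
baseline = 0.0
for episode in range(5):
    idx = supervisor.select(rng)
    reward = rng.normal(loc=1.0 if idx > 0 else 0.5)  # stand-in for simulated savings
    baseline += 0.2 * (reward - baseline)              # running baseline
    supervisor.update(idx, reward, baseline)
    print(episode, AGENTS[idx], round(reward, 3),
          supervisor.selection_probabilities().round(3))
```

Under this reading, the preference update plays the role of the paper's "constantly updated preference function": agents that transfer well gain selection probability, while a poorly matched transferred agent is gradually sampled less, which is one way the framework could mitigate negative transfer.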
Source journal: Energy and Buildings (Engineering & Technology, Civil Engineering)
CiteScore: 12.70
Self-citation rate: 11.90%
Articles per year: 863
Review time: 38 days
Journal description: An international journal devoted to investigations of energy use and efficiency in buildings. Energy and Buildings publishes articles with explicit links to energy use in buildings. The aim is to present new research results and new proven practice aimed at reducing the energy needs of a building and improving indoor environment quality.