Federated proximal policy optimization with action masking: Application in collective heating systems

IF 9.6 · Q1 · Computer Science, Artificial Intelligence
Sara Ghane, Stef Jacobs, Furkan Elmaz, Thomas Huybrechts, Ivan Verhaert, Siegfried Mercelis
{"title":"具有动作掩蔽的联邦近端策略优化:在集体供热系统中的应用","authors":"Sara Ghane ,&nbsp;Stef Jacobs ,&nbsp;Furkan Elmaz ,&nbsp;Thomas Huybrechts ,&nbsp;Ivan Verhaert ,&nbsp;Siegfried Mercelis","doi":"10.1016/j.egyai.2025.100506","DOIUrl":null,"url":null,"abstract":"<div><div>This paper introduces a novel privacy-aware Federated Proximal Policy Optimization (FPPO) method combined with action masking. As a Federated Reinforcement Learning (FRL) approach, the proposed method is used for optimizing the reloading of Domestic Hot Water (DHW) storage tanks, with a focus on energy savings and DHW thermal comfort in collective heating systems. The proposed approach combines FedProx as the Federated Learning (FL) method and Proximal Policy Optimization (PPO) as the Deep Reinforcement Learning (DRL) technique to address the challenges of distributed control while ensuring data privacy. Key contributions include: (1) employing action masking to guarantee compliance with comfort level, (2) designing a global reward function to align agents actions toward collective energy savings, (3) implementing a privacy-aware design where only model parameters are shared with a global aggregator, avoiding raw data transmission, and (4) optimizing PPO’s loss function for improved performance.</div><div>PPO was benchmarked using a common FL method (FedAvg) alongside two other DRL methods, where PPO outperformed both in scalability and energy savings, especially in larger systems. Then, PPO-based FRL was refined into FPPO by integrating a proximal term with coefficient <span><math><mi>μ</mi></math></span> into the loss function to enhance the performance. Experiments were conducted with both fixed and dynamically adjusted <span><math><mi>μ</mi></math></span>, with the latter demonstrating better energy savings and comfort. Results show that FPPO achieves up to 10.08% energy savings while maintaining DHW discomfort below 8.72% in systems with at least 20 dwellings. These findings highlight FPPO as a scalable, privacy-aware, and energy-efficient solution for distributed control in collective heating systems.</div></div>","PeriodicalId":34138,"journal":{"name":"Energy and AI","volume":"20 ","pages":"Article 100506"},"PeriodicalIF":9.6000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Federated proximal policy optimization with action masking: Application in collective heating systems\",\"authors\":\"Sara Ghane ,&nbsp;Stef Jacobs ,&nbsp;Furkan Elmaz ,&nbsp;Thomas Huybrechts ,&nbsp;Ivan Verhaert ,&nbsp;Siegfried Mercelis\",\"doi\":\"10.1016/j.egyai.2025.100506\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper introduces a novel privacy-aware Federated Proximal Policy Optimization (FPPO) method combined with action masking. As a Federated Reinforcement Learning (FRL) approach, the proposed method is used for optimizing the reloading of Domestic Hot Water (DHW) storage tanks, with a focus on energy savings and DHW thermal comfort in collective heating systems. The proposed approach combines FedProx as the Federated Learning (FL) method and Proximal Policy Optimization (PPO) as the Deep Reinforcement Learning (DRL) technique to address the challenges of distributed control while ensuring data privacy. 
Key contributions include: (1) employing action masking to guarantee compliance with comfort level, (2) designing a global reward function to align agents actions toward collective energy savings, (3) implementing a privacy-aware design where only model parameters are shared with a global aggregator, avoiding raw data transmission, and (4) optimizing PPO’s loss function for improved performance.</div><div>PPO was benchmarked using a common FL method (FedAvg) alongside two other DRL methods, where PPO outperformed both in scalability and energy savings, especially in larger systems. Then, PPO-based FRL was refined into FPPO by integrating a proximal term with coefficient <span><math><mi>μ</mi></math></span> into the loss function to enhance the performance. Experiments were conducted with both fixed and dynamically adjusted <span><math><mi>μ</mi></math></span>, with the latter demonstrating better energy savings and comfort. Results show that FPPO achieves up to 10.08% energy savings while maintaining DHW discomfort below 8.72% in systems with at least 20 dwellings. These findings highlight FPPO as a scalable, privacy-aware, and energy-efficient solution for distributed control in collective heating systems.</div></div>\",\"PeriodicalId\":34138,\"journal\":{\"name\":\"Energy and AI\",\"volume\":\"20 \",\"pages\":\"Article 100506\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2025-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Energy and AI\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666546825000382\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energy and AI","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666546825000382","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This paper introduces a novel privacy-aware Federated Proximal Policy Optimization (FPPO) method combined with action masking. As a Federated Reinforcement Learning (FRL) approach, the proposed method is used for optimizing the reloading of Domestic Hot Water (DHW) storage tanks, with a focus on energy savings and DHW thermal comfort in collective heating systems. The proposed approach combines FedProx as the Federated Learning (FL) method and Proximal Policy Optimization (PPO) as the Deep Reinforcement Learning (DRL) technique to address the challenges of distributed control while ensuring data privacy. Key contributions include: (1) employing action masking to guarantee compliance with comfort levels, (2) designing a global reward function to align agents' actions toward collective energy savings, (3) implementing a privacy-aware design where only model parameters are shared with a global aggregator, avoiding raw data transmission, and (4) optimizing PPO's loss function for improved performance.
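As a rough illustration of contribution (1), the sketch below shows one common way to implement action masking for a discrete PPO policy. It assumes PyTorch, and the names (`masked_policy`, `action_mask`) are hypothetical, since the paper does not publish code; the idea is only that reloading actions which would violate the DHW comfort constraint receive zero probability before sampling and before the PPO log-probability terms are evaluated.

```python
import torch
from torch.distributions import Categorical

def masked_policy(logits: torch.Tensor, action_mask: torch.Tensor) -> Categorical:
    """Categorical policy in which disallowed actions have exactly zero probability.

    logits      : (batch, n_actions) raw outputs of the policy network
    action_mask : (batch, n_actions) boolean, True where the action keeps the
                  dwelling's DHW tank within its comfort constraint
    """
    # -inf logits become zero probability after softmax, so masked actions are
    # never sampled and never enter the PPO probability ratio.
    masked_logits = logits.masked_fill(~action_mask, float("-inf"))
    return Categorical(logits=masked_logits)

# Illustrative usage: two dwellings, two actions (0 = idle, 1 = reload the tank).
logits = torch.randn(2, 2)
mask = torch.tensor([[True, True], [True, False]])  # dwelling 2 may not reload now
dist = masked_policy(logits, mask)
action = dist.sample()
log_prob = dist.log_prob(action)
```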
PPO was benchmarked using a common FL method (FedAvg) alongside two other DRL methods, where PPO outperformed both in scalability and energy savings, especially in larger systems. Then, PPO-based FRL was refined into FPPO by integrating a proximal term with coefficient μ into the loss function to enhance the performance. Experiments were conducted with both fixed and dynamically adjusted μ, with the latter demonstrating better energy savings and comfort. Results show that FPPO achieves up to 10.08% energy savings while maintaining DHW discomfort below 8.72% in systems with at least 20 dwellings. These findings highlight FPPO as a scalable, privacy-aware, and energy-efficient solution for distributed control in collective heating systems.
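A minimal sketch, assuming PyTorch, of the FedProx-style refinement described above: the locally computed PPO loss is augmented with a proximal term (μ/2)·‖w − w_global‖² that pulls each dwelling's agent toward the last aggregated global parameters. The function name `fppo_loss` and the way `ppo_loss` and `global_params` are obtained are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def fppo_loss(ppo_loss: torch.Tensor,
              local_model: nn.Module,
              global_params: list,
              mu: float) -> torch.Tensor:
    """Augment an already-computed PPO loss with a FedProx proximal term.

    ppo_loss      : scalar tensor (clipped surrogate + value + entropy terms)
    local_model   : the dwelling's local actor-critic network
    global_params : parameter tensors of the last aggregated global model
    mu            : proximal coefficient, either fixed or adjusted between rounds
    """
    prox = torch.zeros((), device=ppo_loss.device)
    for w_local, w_global in zip(local_model.parameters(), global_params):
        # The global weights are constants during the local update, hence detach().
        prox = prox + torch.sum((w_local - w_global.detach()) ** 2)
    return ppo_loss + 0.5 * mu * prox
```

With μ = 0 this reduces to plain PPO under parameter-averaging aggregation; larger μ keeps local updates closer to the global model. The abstract reports that adjusting μ dynamically between rounds yielded better energy savings and comfort than a fixed value.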
Source journal: Energy and AI (Engineering, miscellaneous)
CiteScore: 16.50
Self-citation rate: 0.00%
Articles published: 64
Average review time: 56 days