Green Data Center Cooling Control via Physics-Guided Safe Reinforcement Learning

IF 2.0 | Q3 | COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Ruihang Wang, Zhi-Ying Cao, Xiaoxia Zhou, Yonggang Wen, Rui Tan
{"title":"Green Data Center Cooling Control via Physics-Guided Safe Reinforcement Learning","authors":"Ruihang Wang, Zhi-Ying Cao, Xiaoxia Zhou, Yonggang Wen, Rui Tan","doi":"10.1145/3582577","DOIUrl":null,"url":null,"abstract":"Deep reinforcement learning (DRL) has shown good performance in tackling Markov decision process (MDP) problems. As DRL optimizes a long-term reward, it is a promising approach to improving the energy efficiency of data center cooling. However, enforcement of thermal safety constraints during DRL’s state exploration is a main challenge. The widely adopted reward shaping approach adds negative reward when the exploratory action results in unsafety. Thus, it needs to experience sufficient unsafe states before it learns how to prevent unsafety. In this paper, we propose a safety-aware DRL framework for data center cooling control. It applies offline imitation learning and online post-hoc rectification to holistically prevent thermal unsafety during online DRL. In particular, the post-hoc rectification searches for the minimum modification to the DRL-recommended action such that the rectified action will not result in unsafety. The rectification is designed based on a thermal state transition model that is fitted using historical safe operation traces and able to extrapolate the transitions to unsafe states explored by DRL. Extensive evaluation for chilled water and direct expansion-cooled data centers in two climate conditions show that our approach saves 18% to 26.6% of total data center power compared with conventional control and reduces safety violations by 94.5% to 99% compared with reward shaping. We also extend the proposed framework to address data centers with non-uniform temperature distributions for detailed safety considerations. The evaluation shows that our approach saves 14% power usage compared with the PID control while addressing safety compliance during the training.","PeriodicalId":7055,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":" ","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Cyber-Physical Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3582577","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Deep reinforcement learning (DRL) has shown strong performance in tackling Markov decision process (MDP) problems. Because DRL optimizes a long-term reward, it is a promising approach to improving the energy efficiency of data center cooling. However, enforcing thermal safety constraints during DRL's state exploration is a key challenge. The widely adopted reward shaping approach adds a negative reward when an exploratory action results in an unsafe state; thus, the agent must experience many unsafe states before it learns to avoid them. In this paper, we propose a safety-aware DRL framework for data center cooling control. It applies offline imitation learning and online post-hoc rectification to prevent thermal unsafety throughout online DRL. In particular, the post-hoc rectification searches for the minimum modification to the DRL-recommended action such that the rectified action does not result in an unsafe state. The rectification is built on a thermal state transition model that is fitted on historical safe operation traces and is able to extrapolate transitions into the unsafe states explored by DRL. Extensive evaluations of chilled-water and direct-expansion-cooled data centers under two climate conditions show that our approach saves 18% to 26.6% of total data center power compared with conventional control and reduces safety violations by 94.5% to 99% compared with reward shaping. We also extend the proposed framework to data centers with non-uniform temperature distributions, enabling finer-grained safety considerations. The evaluation shows that our approach saves 14% of power usage compared with PID control while maintaining safety compliance during training.
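
The post-hoc rectification described above can be read as a constrained projection: keep the DRL action when a fitted transition model predicts safe temperatures, and otherwise solve for the nearest action whose predicted temperatures satisfy the threshold. Below is a minimal sketch of that idea in Python; the toy linear transition model, the threshold T_SAFE, and all function and variable names are illustrative assumptions, not the paper's actual model or interface.

    import numpy as np
    from scipy.optimize import minimize

    T_SAFE = 27.0  # assumed safety threshold on zone temperatures (deg C)

    # Toy linear transition model T_next = A @ state + B @ action + c, standing
    # in for the model the paper fits on historical safe operation traces.
    A = 0.9 * np.eye(3)
    B = np.array([[-1.5, -0.5],
                  [-0.8, -1.2],
                  [-0.6, -0.9]])  # more cooling effort lowers predicted temps
    c = np.array([2.0, 2.5, 1.8])

    def predict_next_temps(state, action):
        return A @ state + B @ action + c

    def rectify(state, drl_action, bounds):
        """Find the minimum modification to the DRL-recommended action such
        that all predicted next temperatures stay within the safe limit."""
        if np.all(predict_next_temps(state, drl_action) <= T_SAFE):
            return drl_action  # predicted safe: keep DRL's recommendation

        # Minimize ||a - a_drl||^2 subject to T_SAFE - T_next(a) >= 0.
        result = minimize(
            lambda a: np.sum((a - drl_action) ** 2),
            x0=drl_action,
            bounds=bounds,
            constraints=[{"type": "ineq",
                          "fun": lambda a: T_SAFE - predict_next_temps(state, a)}],
            method="SLSQP",
        )
        return result.x

    state = np.array([28.0, 29.0, 27.5])   # current zone temperatures
    drl_action = np.array([0.2, 0.3])      # hypothetical normalized cooling setpoints
    safe_action = rectify(state, drl_action, bounds=[(0.0, 1.0)] * 2)
    print(safe_action)  # stronger cooling effort than the DRL recommendation

In the paper the transition model is learned from operation traces and the search runs over the cooling setpoints recommended by the DRL agent; this sketch only illustrates the minimum-modification principle behind the rectification.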
Source Journal
ACM Transactions on Cyber-Physical Systems (Computer Science, Interdisciplinary Applications)
CiteScore: 5.70
Self-citation rate: 4.30%
Articles per year: 40