A multi-agent reinforcement learning-based method for server energy efficiency optimization combining DVFS and dynamic fan control

IF 3.8 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Wenjun Lin, Weiwei Lin, Jianpeng Lin, Haocheng Zhong, Jiangtao Wang, Ligang He
{"title":"结合 DVFS 和动态风扇控制的基于多代理强化学习的服务器能效优化方法","authors":"Wenjun Lin ,&nbsp;Weiwei Lin ,&nbsp;Jianpeng Lin ,&nbsp;Haocheng Zhong ,&nbsp;Jiangtao Wang ,&nbsp;Ligang He","doi":"10.1016/j.suscom.2024.100977","DOIUrl":null,"url":null,"abstract":"<div><p>With the rapid development of the digital economy and intelligent industry, the energy consumption of data centers (DCs) has increased significantly. Various optimization methods are proposed to improve the energy efficiency of servers in DCs. However, existing solutions usually adopt model-based heuristics and best practices to select operations, which are not universally applicable. Moreover, existing works primarily focus on the optimization methods for individual components, with a lack of work on the joint optimization of multiple components. Therefore, we propose a multi-agent reinforcement learning-based method, named MRDF, combining DVFS and dynamic fan control to achieve a trade-off between power consumption and performance while satisfying thermal constraints. MRDF is model-free and learns by continuously interacting with the real server without prior knowledge. To enhance the stability of MRDF in dynamic environments, we design a data-driven baseline comparison method to evaluate the actual contribution of a single agent to the global reward. In addition, an improved Q-learning is proposed to deal with the large state and action space of the multi-core server. We implement MRDF on a Huawei Taishan 200 server and verify the effectiveness by running benchmarks. Experimental results show that the proposed method improves energy efficiency by an average of 3.9% compared to the best baseline solution, while flexibly adapting to different thermal constraints.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"42 ","pages":"Article 100977"},"PeriodicalIF":3.8000,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A multi-agent reinforcement learning-based method for server energy efficiency optimization combining DVFS and dynamic fan control\",\"authors\":\"Wenjun Lin ,&nbsp;Weiwei Lin ,&nbsp;Jianpeng Lin ,&nbsp;Haocheng Zhong ,&nbsp;Jiangtao Wang ,&nbsp;Ligang He\",\"doi\":\"10.1016/j.suscom.2024.100977\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>With the rapid development of the digital economy and intelligent industry, the energy consumption of data centers (DCs) has increased significantly. Various optimization methods are proposed to improve the energy efficiency of servers in DCs. However, existing solutions usually adopt model-based heuristics and best practices to select operations, which are not universally applicable. Moreover, existing works primarily focus on the optimization methods for individual components, with a lack of work on the joint optimization of multiple components. Therefore, we propose a multi-agent reinforcement learning-based method, named MRDF, combining DVFS and dynamic fan control to achieve a trade-off between power consumption and performance while satisfying thermal constraints. MRDF is model-free and learns by continuously interacting with the real server without prior knowledge. To enhance the stability of MRDF in dynamic environments, we design a data-driven baseline comparison method to evaluate the actual contribution of a single agent to the global reward. 
In addition, an improved Q-learning is proposed to deal with the large state and action space of the multi-core server. We implement MRDF on a Huawei Taishan 200 server and verify the effectiveness by running benchmarks. Experimental results show that the proposed method improves energy efficiency by an average of 3.9% compared to the best baseline solution, while flexibly adapting to different thermal constraints.</p></div>\",\"PeriodicalId\":48686,\"journal\":{\"name\":\"Sustainable Computing-Informatics & Systems\",\"volume\":\"42 \",\"pages\":\"Article 100977\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2024-02-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sustainable Computing-Informatics & Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2210537924000222\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sustainable Computing-Informatics & Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2210537924000222","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

With the rapid development of the digital economy and intelligent industry, the energy consumption of data centers (DCs) has increased significantly. Various optimization methods have been proposed to improve the energy efficiency of servers in DCs. However, existing solutions usually adopt model-based heuristics and best practices to select operations, which are not universally applicable. Moreover, existing works primarily focus on optimization methods for individual components, with little work on the joint optimization of multiple components. Therefore, we propose a multi-agent reinforcement learning-based method, named MRDF, combining DVFS and dynamic fan control to achieve a trade-off between power consumption and performance while satisfying thermal constraints. MRDF is model-free and learns by continuously interacting with the real server without prior knowledge. To enhance the stability of MRDF in dynamic environments, we design a data-driven baseline comparison method to evaluate the actual contribution of a single agent to the global reward. In addition, an improved Q-learning algorithm is proposed to deal with the large state and action space of the multi-core server. We implement MRDF on a Huawei Taishan 200 server and verify its effectiveness by running benchmarks. Experimental results show that the proposed method improves energy efficiency by an average of 3.9% compared to the best baseline solution, while flexibly adapting to different thermal constraints.
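The paper is described here only at the level of the abstract, and no code accompanies this listing. Purely as a rough illustration of the general idea (two cooperating tabular Q-learning agents, one choosing a CPU frequency via DVFS and one choosing a fan speed, sharing a global energy-efficiency reward with a simple running-baseline comparison for credit assignment), the sketch below drives a toy simulated server. All constants, the reward shape, and the environment dynamics are assumptions made for illustration; this is not the authors' MRDF implementation or the Taishan 200 hardware.

```python
# Minimal illustrative sketch (NOT the authors' MRDF implementation): two
# independent tabular Q-learning agents -- one picking a CPU frequency, one
# picking a fan speed -- share a global energy-efficiency reward and learn
# from its advantage over a running baseline. The "server" is a toy model.
import random
from collections import defaultdict

FREQ_LEVELS = [1.0, 1.5, 2.0, 2.6]   # hypothetical CPU frequency levels (GHz)
FAN_LEVELS = [30, 50, 70, 90]        # hypothetical fan duty cycles (%)
TEMP_LIMIT = 80.0                    # assumed thermal constraint (deg C)

def server_step(freq, fan):
    """Toy server model: returns (power_w, performance, temp_c)."""
    cpu_power = 20 + 30 * (freq / max(FREQ_LEVELS)) ** 2
    fan_power = 15 * (fan / 100) ** 3
    temp = 45 + 0.8 * cpu_power - 0.25 * fan + random.uniform(-1.0, 1.0)
    perf = freq                      # assume throughput scales with frequency
    return cpu_power + fan_power, perf, temp

def global_reward(power, perf, temp):
    """Shared reward: performance per watt, penalized when the limit is exceeded."""
    r = 100.0 * perf / power
    if temp > TEMP_LIMIT:
        r -= 10.0
    return r

class QAgent:
    """Epsilon-greedy tabular Q-learning over one component's action set."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, r, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = r + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

def temp_bucket(temp):
    """Coarse temperature bucket used as the shared state."""
    return int(temp // 5)

freq_agent, fan_agent = QAgent(FREQ_LEVELS), QAgent(FAN_LEVELS)
state, baseline = temp_bucket(60.0), 0.0

for _ in range(5000):
    f, v = freq_agent.act(state), fan_agent.act(state)
    power, perf, temp = server_step(f, v)
    r = global_reward(power, perf, temp)
    advantage = r - baseline         # crude stand-in for the paper's baseline comparison
    baseline += 0.01 * (r - baseline)
    next_state = temp_bucket(temp)
    freq_agent.update(state, f, advantage, next_state)
    fan_agent.update(state, v, advantage, next_state)
    state = next_state

best_f = max(FREQ_LEVELS, key=lambda a: freq_agent.q[(state, a)])
best_v = max(FAN_LEVELS, key=lambda a: fan_agent.q[(state, a)])
print(f"Greedy policy in temperature bucket {state}: {best_f} GHz, {best_v}% fan")
```

In the paper, the agents interact with a real server and the baseline comparison is data-driven; here a running average of the global reward serves only as a crude placeholder for that idea.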

Journal
Sustainable Computing-Informatics & Systems
Categories: Computer Science, Hardware & Architecture; Computer Science, Information Systems
CiteScore: 10.70
Self-citation rate: 4.40%
Articles published: 142
Journal description: Sustainable computing is a rapidly expanding research area spanning the fields of computer science and engineering, electrical engineering, and other engineering disciplines. The aim of Sustainable Computing: Informatics and Systems (SUSCOM) is to publish research findings related to energy-aware and thermal-aware management of computing resources. Equally important is a spectrum of related research issues, such as applications of computing that can have ecological and societal impacts. SUSCOM publishes original and timely research papers and survey articles on power-, energy-, temperature-, and environment-related topics of current importance to readers. SUSCOM has an editorial board comprising prominent researchers from around the world and selects competitively evaluated, peer-reviewed papers.