An Energy Optimization Algorithm for Data Centers based on Deep Q-learning with Multi-Source Energy

Hui Yu, Mingxiu Tong
{"title":"基于深度q -学习的多源能量数据中心能量优化算法","authors":"Hui Yu, Mingxiu. Tong","doi":"10.1109/AIAM57466.2022.00079","DOIUrl":null,"url":null,"abstract":"More and more data centers are supplied by multi-source energy. However, the features of random, uncertain, and time-varying of renewable energy has made it difficult to achieve good results with traditional methods. In this paper, we research how to coordinate multiple energy sources (such as wind power, solar, and smart grids) to reduce energy costs of data centers. We propose a deep Q-learning (DQN) algorithm based on the auto encoder to control the energy consumption of data center. Our algorithm uses the auto encoder to approximate the Q-value function, learning the expected cost based on the state of current system. It solves the problem that the Q-value function in traditional Q-learning algorithm is difficultly designed under multi-constraint conditions, and it can converge by any state of the system to obtain the optimal solution. In order to further improve the convergence speed and accuracy of the algorithm. We design a parameter optimization strategy to solve the slow convergence problem of the algorithm. This strategy is based on the experience replay technology to optimize the parameters of algorithm. We conducted extensive experiments based on real- world data, and evaluated the performance of our algorithm. The experiment results show that our algorithm can average save 20% energy cost so as to bring a set of safe and highly available solution to meet the requirements of multi-Source energy for data centers.","PeriodicalId":439903,"journal":{"name":"2022 4th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Energy Optimization Algorithm for Data Centers based on Deep Q-learning with Multi-Source Energy\",\"authors\":\"Hui Yu, Mingxiu. Tong\",\"doi\":\"10.1109/AIAM57466.2022.00079\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"More and more data centers are supplied by multi-source energy. However, the features of random, uncertain, and time-varying of renewable energy has made it difficult to achieve good results with traditional methods. In this paper, we research how to coordinate multiple energy sources (such as wind power, solar, and smart grids) to reduce energy costs of data centers. We propose a deep Q-learning (DQN) algorithm based on the auto encoder to control the energy consumption of data center. Our algorithm uses the auto encoder to approximate the Q-value function, learning the expected cost based on the state of current system. It solves the problem that the Q-value function in traditional Q-learning algorithm is difficultly designed under multi-constraint conditions, and it can converge by any state of the system to obtain the optimal solution. In order to further improve the convergence speed and accuracy of the algorithm. We design a parameter optimization strategy to solve the slow convergence problem of the algorithm. This strategy is based on the experience replay technology to optimize the parameters of algorithm. We conducted extensive experiments based on real- world data, and evaluated the performance of our algorithm. 
The experiment results show that our algorithm can average save 20% energy cost so as to bring a set of safe and highly available solution to meet the requirements of multi-Source energy for data centers.\",\"PeriodicalId\":439903,\"journal\":{\"name\":\"2022 4th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 4th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIAM57466.2022.00079\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIAM57466.2022.00079","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

More and more data centers are supplied by multi-source energy. However, the random, uncertain, and time-varying nature of renewable energy makes it difficult for traditional methods to achieve good results. In this paper, we study how to coordinate multiple energy sources (such as wind power, solar power, and the smart grid) to reduce the energy cost of data centers. We propose a deep Q-learning (DQN) algorithm based on an autoencoder to control the energy consumption of a data center. The algorithm uses the autoencoder to approximate the Q-value function, learning the expected cost from the current system state. This addresses the difficulty of designing the Q-value function of traditional Q-learning under multi-constraint conditions, and the algorithm can converge from any system state to the optimal solution. To further improve convergence speed and accuracy, we design a parameter optimization strategy based on experience replay, which resolves the algorithm's slow convergence. We conducted extensive experiments on real-world data and evaluated the performance of our algorithm. The results show that it saves 20% of energy cost on average, providing a safe and highly available solution that meets the multi-source energy requirements of data centers.
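The abstract outlines the key ingredients of the approach: a Q-value function approximated through an autoencoder, and parameter updates driven by experience replay. The paper does not include code, so the following is only a minimal sketch of how such an agent could be structured. The state and action definitions, network sizes, learning rate, and the reconstruction-loss weight are all illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: a DQN agent whose Q-network is built on an
# autoencoder and trained from an experience replay buffer. The toy
# energy-dispatch state/action spaces and all hyperparameters below are
# hypothetical, not from the paper.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6   # e.g. wind output, solar output, grid price, workload, battery level, time slot
N_ACTIONS = 4   # e.g. draw power from wind / solar / grid / battery


class AutoencoderQNet(nn.Module):
    """Encoder-decoder that compresses the system state; a Q head on the
    latent code estimates the expected cost of each dispatch action."""
    def __init__(self, state_dim=STATE_DIM, latent_dim=3, n_actions=N_ACTIONS):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, state_dim))
        self.q_head = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                    nn.Linear(16, n_actions))

    def forward(self, state):
        z = self.encoder(state)
        return self.q_head(z), self.decoder(z)


net = AutoencoderQNet()
target_net = AutoencoderQNet()
target_net.load_state_dict(net.state_dict())   # sync periodically during training
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                  # experience replay buffer of (s, a, r, s')
gamma, batch_size = 0.99, 64


def select_action(state, epsilon=0.1):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q, _ = net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())


def train_step():
    """Sample past transitions and update the parameters. The loss combines
    the temporal-difference error with a reconstruction term so the encoder
    keeps a faithful latent representation of the system state."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
    q, recon = net(s)
    q_sa = q.gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next, _ = target_net(s2)
        target = r + gamma * q_next.max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target) + 0.1 * nn.functional.mse_loss(recon, s)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In use, a simulated (or real) data center environment would append each observed transition with `replay.append((state, action, cost, next_state))` and call `train_step()` every step; the replay buffer breaks the correlation between consecutive samples, which is the mechanism the paper credits for faster, more stable convergence.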