Deep learning algorithms for temperature prediction in two-phase immersion-cooled data centres

Impact Factor: 4.0 · JCR Q1 (Mathematics, Interdisciplinary Applications) · CAS Tier 3 (Engineering & Technology)
Pratheek Suresh, Balaji Chakravarthy
{"title":"Deep learning algorithms for temperature prediction in two-phase immersion-cooled data centres","authors":"Pratheek Suresh, Balaji Chakravarthy","doi":"10.1108/hff-08-2023-0468","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) Networks based on recursive encoder-decoder architecture.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of recursive-long short-term memory-encoder-decoder (R-LSTM-ED), recursive-convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches are compared using mean absolute error, root mean square error, mean absolute percentage error and coefficient of determination (<em>R</em><sup>2</sup>) as performance metrics. The impact of window size, sampling period and noise within training data on the performance of the model is investigated.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and R-CNN-LSTM model by 4%, 11% and 12.3% in all forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period based on the study is found to be 2 s and the window size to be 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.</p><!--/ Abstract__block -->\n<h3>Research limitations/implications</h3>\n<p>The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>The proposed multivariate-recursive-prediction models are trained and tested by using real Data Centre workload traces applied to the immersion-cooled system developed in the laboratory.</p><!--/ Abstract__block -->","PeriodicalId":14263,"journal":{"name":"International Journal of Numerical Methods for Heat & Fluid Flow","volume":"61 1","pages":""},"PeriodicalIF":4.0000,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Numerical Methods for Heat & Fluid Flow","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1108/hff-08-2023-0468","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operation in data centre applications requires an effective control framework for immersion cooling systems, which in turn necessitates prediction of server temperature. While deep learning-based temperature prediction models have proven effective, their prediction accuracy still needs improvement. This study aims to develop a temperature prediction model using long short-term memory (LSTM) networks based on a recursive encoder-decoder architecture.
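
The abstract does not give implementation details of the recursive encoder-decoder, so the following is a minimal sketch, assuming a Keras/TensorFlow implementation, of what an LSTM encoder-decoder for multi-step temperature forecasting could look like. The window length, forecast horizon, feature count and layer sizes are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of an LSTM encoder-decoder for multi-step temperature
# forecasting (assumed Keras/TensorFlow implementation; hyperparameters
# below are illustrative, not the authors' values).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 30      # past samples fed to the encoder (assumed)
HORIZON = 5      # future steps produced by the decoder (assumed)
N_FEATURES = 3   # e.g. heater power, fluid temperature, heater temperature (assumed)

def build_lstm_encoder_decoder():
    inputs = layers.Input(shape=(WINDOW, N_FEATURES))
    # Encoder: compress the input window into its final hidden/cell state.
    _, state_h, state_c = layers.LSTM(64, return_state=True)(inputs)
    # Decoder: repeat the context vector once per forecast step and unroll.
    context = layers.RepeatVector(HORIZON)(state_h)
    decoded = layers.LSTM(64, return_sequences=True)(
        context, initial_state=[state_h, state_c])
    # One temperature value per forecast step.
    outputs = layers.TimeDistributed(layers.Dense(1))(decoded)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    model = build_lstm_encoder_decoder()
    # Synthetic data only, to show the expected tensor shapes.
    x = np.random.rand(128, WINDOW, N_FEATURES).astype("float32")
    y = np.random.rand(128, HORIZON, 1).astype("float32")
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    print(model.predict(x[:1]).shape)  # (1, HORIZON, 1)
```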

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100 as the dielectric fluid. The performance of recursive long short-term memory encoder-decoder (R-LSTM-ED), recursive convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches is compared using mean absolute error, root mean square error, mean absolute percentage error and the coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within the training data on model performance is investigated.
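
The four metrics named above are standard regression measures; a small NumPy sketch of how they could be computed from measured and predicted heater temperatures is given below (variable names and example values are hypothetical).

```python
# Sketch of the four evaluation metrics named above: MAE, RMSE, MAPE and R^2.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                    # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))             # root mean square error
    mape = 100.0 * np.mean(np.abs(err / y_true))  # mean absolute percentage error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                    # coefficient of determination
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

if __name__ == "__main__":
    measured = [56.2, 57.1, 58.0, 58.4]    # hypothetical heater temperatures (deg C)
    predicted = [56.0, 57.3, 57.8, 58.6]
    print(regression_metrics(measured, predicted))
```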

Findings

The R-LSTM-ED model consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and the R-CNN-LSTM model by 4%, 11% and 12.3%, for forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period is found to be 2 s and the optimum window size 60 s. The model's performance deteriorates significantly as the noise level reaches 10%.
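
For illustration, the sketch below shows one plausible way to build sliding-window training pairs from a temperature time series using the reported optimum sampling period (2 s) and window size (60 s); the resampling and array layout are assumptions, not the authors' exact preprocessing.

```python
# Sliding-window dataset construction using the reported optimum sampling
# period (2 s) and window size (60 s). Forecast range and data are assumed.
import numpy as np

SAMPLING_PERIOD_S = 2                      # seconds between samples (from the findings)
WINDOW_S = 60                              # history window length in seconds (from the findings)
HORIZON_S = 10                             # forecast range, e.g. 10, 30 or 60 s
WINDOW = WINDOW_S // SAMPLING_PERIOD_S     # 30 input samples
HORIZON = HORIZON_S // SAMPLING_PERIOD_S   # 5 target samples

def make_windows(series):
    """Split a 1-D temperature series into (input window, target horizon) pairs."""
    series = np.asarray(series, dtype=float)
    x, y = [], []
    for start in range(len(series) - WINDOW - HORIZON + 1):
        x.append(series[start:start + WINDOW])
        y.append(series[start + WINDOW:start + WINDOW + HORIZON])
    return np.stack(x), np.stack(y)

if __name__ == "__main__":
    # Synthetic temperature trace sampled every 2 s, for shape checking only.
    t = np.arange(0, 600, SAMPLING_PERIOD_S)
    temps = 55.0 + 2.0 * np.sin(t / 60.0)
    x, y = make_windows(temps)
    print(x.shape, y.shape)   # (N, 30), (N, 5)
```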

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate recursive prediction models are trained and tested using real data centre workload traces applied to the immersion-cooled system developed in the laboratory.

Source journal: International Journal of Numerical Methods for Heat & Fluid Flow
CiteScore: 9.50
Self-citation rate: 11.90%
Articles per year: 100
Review time: 6-12 weeks
Journal description: The main objective of this international journal is to provide applied mathematicians, engineers and scientists engaged in computer-aided design and research in computational heat transfer and fluid dynamics, whether in academic institutions or industry, with timely and accessible information on the development, refinement and application of computer-based numerical techniques for solving problems in heat and fluid flow.