Hybrid deep learning and evolutionary algorithms for accurate cloud workload prediction

IF 3.3 · CAS Tier 3 (Computer Science) · JCR Q2, COMPUTER SCIENCE, THEORY & METHODS
Tassawar Ali, Hikmat Ullah Khan, Fawaz Khaled Alarfaj, Mohammed AlReshoodi
{"title":"混合深度学习和进化算法,实现准确的云计算工作量预测","authors":"Tassawar Ali, Hikmat Ullah Khan, Fawaz Khaled Alarfaj, Mohammed AlReshoodi","doi":"10.1007/s00607-024-01340-8","DOIUrl":null,"url":null,"abstract":"<p>Cloud computing offers demand-based allocation of required resources to its clients ensuring optimal use of resources in a cost-effective manner. However, due to the massive increase in demand for physical resources by datacenters cloud management suffers from inefficient resource management. To enhance efficiency by reducing resource setup time, workload prediction has become an active research area. It helps to make management decisions proactively and enables the cloud management system to better respond to spikes in the workload. This study proposes a hybrid model combining both state-of-the-art deep learning models and evolutionary algorithms for workload prediction. The proposed cluster-based differential evolution neural network model utilizes differential evolution for the optimization of feature weights of the deep neural network to predict the future workloads of a cloud datacenter. The proposed model uses a novel mutation strategy that clusters the population based on an agglomerative technique and chooses the best gene from randomly chosen clusters. Thus, the strategy creates a balance between the exploration and exploitation of the population and enables the model to avoid local optima and converge rapidly. The datasets used for the experiments are created from Google’s real-world traces and the Alibaba platform. The model is compared with backpropagation, Adam optimizer-based LSTM, and an evolutionary neural network-based three-mutation policy. We evaluated the performance of the proposed model in terms of root mean squared error in predicting the upcoming CPU, RAM, and BW usage. The proposed model achieved an error rate as low as 0.0002 to outperform the existing studies in the relevant literature. To further authenticate the results, we performed the statistical analysis of the obtained results in terms of R-squared, mean bias deviation, 90th percentile score, and Theil’s U statistics. The high accuracy and automaticity of the proposed model have paved the way for its application in diverse areas of cloud computing, including real-time applications.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"10 1","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hybrid deep learning and evolutionary algorithms for accurate cloud workload prediction\",\"authors\":\"Tassawar Ali, Hikmat Ullah Khan, Fawaz Khaled Alarfaj, Mohammed AlReshoodi\",\"doi\":\"10.1007/s00607-024-01340-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Cloud computing offers demand-based allocation of required resources to its clients ensuring optimal use of resources in a cost-effective manner. However, due to the massive increase in demand for physical resources by datacenters cloud management suffers from inefficient resource management. To enhance efficiency by reducing resource setup time, workload prediction has become an active research area. It helps to make management decisions proactively and enables the cloud management system to better respond to spikes in the workload. This study proposes a hybrid model combining both state-of-the-art deep learning models and evolutionary algorithms for workload prediction. 
The proposed cluster-based differential evolution neural network model utilizes differential evolution for the optimization of feature weights of the deep neural network to predict the future workloads of a cloud datacenter. The proposed model uses a novel mutation strategy that clusters the population based on an agglomerative technique and chooses the best gene from randomly chosen clusters. Thus, the strategy creates a balance between the exploration and exploitation of the population and enables the model to avoid local optima and converge rapidly. The datasets used for the experiments are created from Google’s real-world traces and the Alibaba platform. The model is compared with backpropagation, Adam optimizer-based LSTM, and an evolutionary neural network-based three-mutation policy. We evaluated the performance of the proposed model in terms of root mean squared error in predicting the upcoming CPU, RAM, and BW usage. The proposed model achieved an error rate as low as 0.0002 to outperform the existing studies in the relevant literature. To further authenticate the results, we performed the statistical analysis of the obtained results in terms of R-squared, mean bias deviation, 90th percentile score, and Theil’s U statistics. The high accuracy and automaticity of the proposed model have paved the way for its application in diverse areas of cloud computing, including real-time applications.</p>\",\"PeriodicalId\":10718,\"journal\":{\"name\":\"Computing\",\"volume\":\"10 1\",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00607-024-01340-8\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00607-024-01340-8","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Cloud computing offers demand-based allocation of required resources to its clients, ensuring optimal use of resources in a cost-effective manner. However, due to the massive increase in demand for physical resources by datacenters, cloud management suffers from inefficient resource management. Workload prediction, which enhances efficiency by reducing resource setup time, has therefore become an active research area. It supports proactive management decisions and enables the cloud management system to respond better to spikes in the workload. This study proposes a hybrid model that combines state-of-the-art deep learning models with evolutionary algorithms for workload prediction. The proposed cluster-based differential evolution neural network model uses differential evolution to optimize the feature weights of a deep neural network that predicts the future workloads of a cloud datacenter. The model employs a novel mutation strategy that clusters the population with an agglomerative technique and chooses the best gene from randomly chosen clusters. The strategy thus balances exploration and exploitation of the population, enabling the model to avoid local optima and converge rapidly. The datasets used for the experiments are built from Google's real-world traces and the Alibaba platform. The model is compared with backpropagation, an Adam-optimizer-based LSTM, and an evolutionary neural network with a three-mutation policy. We evaluated the performance of the proposed model in terms of root mean squared error (RMSE) in predicting upcoming CPU, RAM, and bandwidth (BW) usage. The proposed model achieved an error as low as 0.0002, outperforming existing studies in the relevant literature. To further validate the results, we performed a statistical analysis of the obtained results in terms of R-squared, mean bias deviation, 90th-percentile score, and Theil's U statistic. The high accuracy and automaticity of the proposed model pave the way for its application in diverse areas of cloud computing, including real-time applications.
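The abstract describes the cluster-based mutation only at a high level. Below is a minimal Python sketch of how such a mutation operator could look; the population size, number of clusters, linkage method, scale factor F, and fitness definition are all illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def clustered_mutation(population, fitness, n_clusters=4, F=0.5):
        # Agglomerative (hierarchical) clustering of the candidate weight
        # vectors; Ward linkage and n_clusters are assumed choices.
        labels = fcluster(linkage(population, method="ward"),
                          t=n_clusters, criterion="maxclust")
        donors = np.empty_like(population)
        for i in range(len(population)):
            # Pick a random cluster and take its best member (lowest error),
            # mirroring "choose the best gene from randomly chosen clusters".
            cluster = np.random.choice(np.unique(labels))
            members = np.flatnonzero(labels == cluster)
            best = members[np.argmin(fitness[members])]
            # Standard DE difference term from two distinct random vectors.
            r1, r2 = np.random.choice(len(population), size=2, replace=False)
            donors[i] = population[best] + F * (population[r1] - population[r2])
        return donors

    # Hypothetical usage: candidates are flattened neural-network weight
    # vectors and fitness is their validation RMSE (lower is better).
    pop = np.random.randn(20, 50)
    fit = np.random.rand(20)
    donors = clustered_mutation(pop, fit)

Taking the best member of a random cluster, rather than the global best, is what gives the balance the abstract claims: exploitation within a cluster, exploration across clusters. The reported metrics can be sketched the same way. The paper does not state which variant of Theil's U it uses; the U1 formulation below is one common choice, and the metric names are mapped from the abstract (reusing numpy from above):

    def evaluate(y_true, y_pred):
        err = y_pred - y_true
        rmse = np.sqrt(np.mean(err ** 2))            # root mean squared error
        r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
        mbd = np.mean(err)                           # mean bias deviation
        p90 = np.percentile(np.abs(err), 90)         # 90th-percentile abs. error
        theil_u1 = rmse / (np.sqrt(np.mean(y_true ** 2))
                           + np.sqrt(np.mean(y_pred ** 2)))
        return {"RMSE": rmse, "R2": r2, "MBD": mbd,
                "P90": p90, "TheilU1": theil_u1}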

Source journal
Computing (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 8.20
Self-citation rate: 2.70%
Articles per year: 107
Review time: 3 months
Journal description: Computing publishes original papers, short communications and surveys on all fields of computing. The contributions should be written in English and may be of theoretical or applied nature; the essential criteria are computational relevance and systematic foundation of results.