AS-ODB: Multivariate Attention Supervised Learning Based Optimized DBN Approach for Cloud Workload Prediction

Impact Factor: 0.8 | JCR Quartile: Q4 (Optics)
G. M. Kiran, A. Aparna Rajesh, D. Basavesha
{"title":"AS-ODB: Multivariate Attention Supervised Learning Based Optimized DBN Approach for Cloud Workload Prediction","authors":"G. M. Kiran,&nbsp;A. Aparna Rajesh,&nbsp;D. Basavesha","doi":"10.3103/S1060992X25700122","DOIUrl":null,"url":null,"abstract":"<p>Attainable on demand cloud computing makes it feasible to access a centralized shared pool of computing resources. Accurate estimation of cloud workload is necessary for optimal performance and effective use of cloud computing resources. Because cloud workloads are dynamic and unpredictable, this is a problematic problem. In this case, deep learning can provide reliable foundations for workload prediction in data centres when trained appropriately. In the proposed model, efficient workload prediction is executed out using novel deep learning. Efficient management of these hyperparameters may significantly improve the neural network model’s performance. Using the data centre’s workload traces at many consecutive time steps, the suggested approach is shown to be able to estimate Central Processing Unit (CPU) utilization. Collects raw data retrieved from the storage, including the number and type of requests, virtual machine (VMs) costs, and resource usage. Discover patterns and oscillations in the workload trace by preprocessing the data to increase the prediction efficacy of this model. During data pre-processing, the KCR approach, min max normalization, and data cleaning are used to select the important properties from raw data samples, eliminate noise, and normalize them. After that, a sliding window is used for deep learning processing to convert multivariate data into time series with supervised learning. Next, utilize a deep belief network based on green anaconda optimization (GrA-DBN) to attain precise workload forecasting. Comparing the suggested methodology with existing models, experimental results show that it provides a better trade-off between accuracy and training time. The suggested method provides higher performance, with an execution time of 28.5 s and an accuracy rate of 93.60%. According to the simulation results, the GrA-DBN workload prediction method performs better than other algorithms.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 3","pages":"389 - 401"},"PeriodicalIF":0.8000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optical Memory and Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.3103/S1060992X25700122","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0

Abstract

On-demand cloud computing makes it feasible to access a centralized shared pool of computing resources. Accurate estimation of cloud workload is necessary for optimal performance and effective use of cloud computing resources; because cloud workloads are dynamic and unpredictable, this is a challenging problem. When trained appropriately, deep learning can provide a reliable foundation for workload prediction in data centres. In the proposed model, efficient workload prediction is carried out using a novel deep learning approach, and careful management of its hyperparameters can significantly improve the neural network model’s performance. Using the data centre’s workload traces over many consecutive time steps, the suggested approach is shown to estimate Central Processing Unit (CPU) utilization. The model collects raw data retrieved from storage, including the number and type of requests, virtual machine (VM) costs, and resource usage, and preprocesses this data to discover patterns and oscillations in the workload trace, which increases the model’s prediction efficacy. During preprocessing, the KCR approach, min-max normalization, and data cleaning are used to select important attributes from the raw data samples, eliminate noise, and normalize the values. A sliding window then converts the multivariate data into a time series suitable for supervised deep learning. Finally, a deep belief network optimized with green anaconda optimization (GrA-DBN) produces the workload forecast. Experimental results show that, compared with existing models, the suggested methodology provides a better trade-off between accuracy and training time, achieving an execution time of 28.5 s and an accuracy of 93.60%. According to the simulation results, the GrA-DBN workload prediction method outperforms the other algorithms.
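As a concrete illustration of the preprocessing steps described in the abstract, the sketch below (Python with NumPy) applies min-max normalization and then a sliding window to turn a multivariate workload trace into supervised (X, y) samples. The window length, feature layout, and target column are hypothetical placeholders; the paper’s KCR feature selection and GrA-DBN forecaster are not reproduced here.

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each feature column into [0, 1] (min-max normalization)."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    return (x - x_min) / np.where(x_max > x_min, x_max - x_min, 1.0)

def sliding_window(data: np.ndarray, window: int, target_col: int):
    """Convert a multivariate time series into supervised (X, y) pairs:
    each sample is `window` consecutive rows; the label is the next
    value of the target column (e.g., CPU utilization)."""
    X, y = [], []
    for t in range(len(data) - window):
        X.append(data[t : t + window])           # shape (window, n_features)
        y.append(data[t + window, target_col])   # next-step target value
    return np.array(X), np.array(y)

# Hypothetical trace: rows are time steps; columns might hold request
# count, VM cost, and CPU utilization (column 2 is the target here).
trace = np.random.rand(500, 3)
norm = min_max_normalize(trace)
X, y = sliding_window(norm, window=10, target_col=2)
print(X.shape, y.shape)  # (490, 10, 3) (490,)
```

Pairing each block of `window` consecutive normalized rows with the next-step CPU-utilization value is the standard way to cast time-series forecasting as a supervised learning problem, which is what the abstract’s sliding-window step accomplishes before the DBN is trained.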


Source journal: Optical Memory and Neural Networks
CiteScore: 1.50
Self-citation rate: 11.10%
Articles per year: 25

Journal overview: The journal covers a wide range of issues in information optics, such as optical memory, mechanisms for optical data recording and processing, photosensitive materials, optical, optoelectronic and holographic nanostructures, and many other related topics. Papers on memory systems using holographic and biological structures and concepts of brain operation are also included. The journal pays particular attention to research in the field of neural net systems that may lead to a new generation of computational technologies by endowing them with intelligence.