Microservice-Oriented Workload Prediction Using Deep Learning

Sebastian Ştefan, Virginia Niculescu
e Informatica Softw. Eng. J., 13(1), 220107 (2022)
DOI: 10.37190/e-inf220107
Citations: 1

Abstract

Background: Service-oriented architectures are becoming increasingly popular due to their flexibility and scalability, which make them a good fit for cloud deployments. Aim: This research studies how an efficient workload prediction mechanism for a practical proactive scaler could be provided. Such a prediction mechanism is necessary because fully taking advantage of on-demand resources and reducing manual tuning requires an auto-scaling, preferably predictive, approach: increasing or decreasing the number of deployed services according to the incoming workload. Method: To achieve this goal, a workload prediction methodology that takes microservice concerns into account is proposed. Since this should be based on a performant prediction model, several deep learning algorithms were chosen and analysed against classical approaches from recent research. Experiments were conducted to identify the most appropriate prediction model. Results: The analysis shows very good results obtained with the MLP (MultiLayer Perceptron) model, better than those obtained with classical time-series approaches, with a 49% reduction of the mean prediction error when using as data two 12-day Wikipedia traces with two different time windows: 10 and 15 min. Conclusion: The tests and the comparative analysis lead to the conclusion that, considering accuracy as well as computational overhead and prediction time, the MLP model qualifies as a reliable foundation for the development of proactive microservice scaler applications.
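The abstract describes predicting future workload from recent traffic history with an MLP, using fixed time windows. The following is a minimal illustrative sketch of that kind of setup, not the authors' exact configuration: the synthetic trace, window size, and network sizes are all assumptions, and scikit-learn's `MLPRegressor` stands in for the paper's model. It frames each prediction as "given the last 12 intervals of request counts, predict the next one", and compares the MLP's mean absolute error against a naive last-value baseline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic workload trace: daily-like seasonality plus noise, standing in
# for requests-per-interval counts (e.g. 10-min aggregation windows).
t = np.arange(1200)
trace = 500 + 200 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 20, t.size)

# Standardize so the MLP trains reliably; errors below are in these units.
series = (trace - trace.mean()) / trace.std()

def make_windows(s, window):
    """Turn a 1-D series into (past window -> next value) training pairs."""
    X = np.stack([s[i:i + window] for i in range(s.size - window)])
    y = s[window:]
    return X, y

WINDOW = 12  # 12 past intervals used to predict the next one (assumption)
X, y = make_windows(series, WINDOW)
split = int(0.8 * len(X))  # chronological train/test split, no shuffling

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])

pred = mlp.predict(X[split:])
mae_mlp = float(np.mean(np.abs(pred - y[split:])))
# Naive baseline: predict the last observed value of each window.
mae_naive = float(np.mean(np.abs(X[split:, -1] - y[split:])))
print(f"MLP MAE (standardized units):   {mae_mlp:.3f}")
print(f"Naive MAE (standardized units): {mae_naive:.3f}")
```

A proactive scaler would then map the predicted workload to a replica count; the paper's evaluation additionally weighs computational overhead and prediction latency, which matter when retraining and predicting inside a live scaling loop.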