Trust your local scaler: A continuous, decentralized approach to autoscaling

IF 1.0 | CAS Region 4 (Computer Science) | JCR Q4 (Computer Science, Hardware & Architecture)
Martin Straesser, Stefan Geissler, Stanislav Lange, Lukas Kilian Schumann, Tobias Hossfeld, Samuel Kounev
{"title":"Trust your local scaler: A continuous, decentralized approach to autoscaling","authors":"Martin Straesser ,&nbsp;Stefan Geissler ,&nbsp;Stanislav Lange ,&nbsp;Lukas Kilian Schumann ,&nbsp;Tobias Hossfeld ,&nbsp;Samuel Kounev","doi":"10.1016/j.peva.2024.102452","DOIUrl":null,"url":null,"abstract":"<div><div>Autoscaling is a critical component of modern cloud computing environments, improving flexibility, efficiency, and cost-effectiveness. Current approaches use centralized autoscalers that make decisions based on averaged monitoring data of managed service instances in fixed intervals. In this scheme, autoscalers are single points of failure, tightly coupled to monitoring systems, and limited in reaction times, making non-optimal scaling decisions costly. This paper presents an approach for continuous decentralized autoscaling, where decisions are made on a service instance level. By distributing scaling decisions of instances over time, autoscaling evolves into a quasi-continuous process, enabling great adaptability to different workload patterns. We analyze our approach on different abstraction levels, including a model-based, simulation-based, and real-world evaluation. Proof-of-concept experiments show that our approach is able to scale different applications deployed in containers and virtual machines in realistic environments, yielding better scaling performance compared to established baseline autoscalers, especially in scenarios with highly dynamic workloads.</div></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"167 ","pages":"Article 102452"},"PeriodicalIF":1.0000,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Performance Evaluation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0166531624000579","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Autoscaling is a critical component of modern cloud computing environments, improving flexibility, efficiency, and cost-effectiveness. Current approaches use centralized autoscalers that make decisions based on averaged monitoring data of managed service instances in fixed intervals. In this scheme, autoscalers are single points of failure, tightly coupled to monitoring systems, and limited in reaction times, making non-optimal scaling decisions costly. This paper presents an approach for continuous decentralized autoscaling, where decisions are made on a service instance level. By distributing scaling decisions of instances over time, autoscaling evolves into a quasi-continuous process, enabling great adaptability to different workload patterns. We analyze our approach on different abstraction levels, including a model-based, simulation-based, and real-world evaluation. Proof-of-concept experiments show that our approach is able to scale different applications deployed in containers and virtual machines in realistic environments, yielding better scaling performance compared to established baseline autoscalers, especially in scenarios with highly dynamic workloads.
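To make the abstract's idea more concrete, the following minimal Python sketch illustrates the kind of per-instance decision loop it describes: each service instance reads a purely local metric, applies its own thresholds, and a randomized start offset spreads decisions across instances over time. All names, thresholds, and the placeholder metric and actuators are illustrative assumptions; the paper's actual decision logic and platform integration are not reproduced here.

```python
import random
import time

# Illustrative sketch only: names and thresholds below are assumptions, not the
# paper's algorithm. Each service instance evaluates a local metric and issues
# its own scaling request, with no central autoscaler or shared monitoring.

HIGH_UTIL = 0.80    # example upper utilization threshold
LOW_UTIL = 0.20     # example lower utilization threshold
BASE_PERIOD = 1.0   # seconds between local evaluations (shortened for the demo)


def local_cpu_utilization() -> float:
    """Placeholder for an instance-local metric read (no central monitoring)."""
    return random.uniform(0.0, 1.0)


def request_scale_out(instance_id: int) -> None:
    """Placeholder actuator: ask the platform to start one more instance."""
    print(f"instance {instance_id}: requests scale-out")


def request_scale_in(instance_id: int) -> None:
    """Placeholder actuator: this instance offers to terminate itself."""
    print(f"instance {instance_id}: requests scale-in")


def instance_scaling_loop(instance_id: int, rounds: int = 3) -> None:
    # A random start offset desynchronizes the instances, so scaling decisions
    # of the whole service spread over time instead of arriving in one fixed
    # global interval, approximating the quasi-continuous behavior described.
    time.sleep(random.uniform(0.0, BASE_PERIOD))
    for _ in range(rounds):
        util = local_cpu_utilization()
        if util > HIGH_UTIL:
            request_scale_out(instance_id)
        elif util < LOW_UTIL:
            request_scale_in(instance_id)
        time.sleep(BASE_PERIOD)


if __name__ == "__main__":
    # Sequential stand-in for several independent instances.
    for i in range(3):
        instance_scaling_loop(i)
```

In this toy setting, the randomized offset is what turns fixed-interval autoscaling into a quasi-continuous stream of decisions; a real deployment would replace the placeholder metric and actuators with platform-specific hooks.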
Source Journal
Performance Evaluation
Category: Engineering & Technology, Computer Science: Theory & Methods
CiteScore: 3.10
Self-citation rate: 0.00%
Articles per year: 20
Review time: 24 days
Journal description: Performance Evaluation functions as a leading journal in the area of modeling, measurement, and evaluation of performance aspects of computing and communication systems. As such, it aims to present a balanced and complete view of the entire Performance Evaluation profession. Hence, the journal is interested in papers that focus on one or more of the following dimensions:
- Define new performance evaluation tools, including measurement and monitoring tools as well as modeling and analytic techniques
- Provide new insights into the performance of computing and communication systems
- Introduce new application areas where performance evaluation tools can play an important role and creative new uses for performance evaluation tools.
More specifically, common application areas of interest include the performance of:
- Resource allocation and control methods and algorithms (e.g. routing and flow control in networks, bandwidth allocation, processor scheduling, memory management)
- System architecture, design and implementation
- Cognitive radio
- VANETs
- Social networks and media
- Energy efficient ICT
- Energy harvesting
- Data centers
- Data centric networks
- System reliability
- System tuning and capacity planning
- Wireless and sensor networks
- Autonomic and self-organizing systems
- Embedded systems
- Network science