Trust your local scaler: A continuous, decentralized approach to autoscaling
Martin Straesser, Stefan Geissler, Stanislav Lange, Lukas Kilian Schumann, Tobias Hossfeld, Samuel Kounev
Performance Evaluation, Volume 167, Article 102452 (published 2024-11-08)
DOI: 10.1016/j.peva.2024.102452
URL: https://www.sciencedirect.com/science/article/pii/S0166531624000579
Abstract
Autoscaling is a critical component of modern cloud computing environments, improving flexibility, efficiency, and cost-effectiveness. Current approaches use centralized autoscalers that make decisions based on averaged monitoring data of managed service instances at fixed intervals. In this scheme, autoscalers are single points of failure, tightly coupled to monitoring systems, and limited in reaction time, making non-optimal scaling decisions costly. This paper presents an approach for continuous, decentralized autoscaling, where decisions are made at the service-instance level. By distributing the scaling decisions of instances over time, autoscaling evolves into a quasi-continuous process, enabling high adaptability to different workload patterns. We analyze our approach on different abstraction levels, including a model-based, a simulation-based, and a real-world evaluation. Proof-of-concept experiments show that our approach is able to scale different applications deployed in containers and virtual machines in realistic environments, yielding better scaling performance than established baseline autoscalers, especially in scenarios with highly dynamic workloads.
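The core idea in the abstract, per-instance scaling decisions whose check times are staggered so that decisions spread quasi-continuously over time, can be sketched in a few lines. This is an illustrative toy, not the paper's actual algorithm: the class and function names, the fixed utilization thresholds, and the offset scheme are all assumptions made for the example.

```python
class Instance:
    """One service instance that makes a purely local scaling decision
    (hypothetical sketch of the decentralized scheme, not the paper's method)."""

    def __init__(self, instance_id, check_offset, upscale_at=0.8, downscale_at=0.2):
        self.id = instance_id
        # Staggered check offset: instances decide at different times,
        # so scaling becomes a quasi-continuous process instead of a
        # synchronized, fixed-interval one.
        self.check_offset = check_offset
        self.upscale_at = upscale_at
        self.downscale_at = downscale_at

    def decide(self, own_utilization):
        # Local decision only: no central autoscaler, no averaged
        # cluster-wide monitoring data.
        if own_utilization > self.upscale_at:
            return "scale_out"
        if own_utilization < self.downscale_at:
            return "terminate_self"
        return "keep"


def simulate(utilizations, stagger=0.25):
    """Run one decision round; returns (time, instance id, action) events,
    ordered by the time at which each instance made its decision."""
    instances = [Instance(i, check_offset=i * stagger)
                 for i in range(len(utilizations))]
    events = []
    for inst, util in zip(instances, utilizations):
        action = inst.decide(util)
        if action != "keep":
            events.append((inst.check_offset, inst.id, action))
    return sorted(events)


events = simulate([0.9, 0.5, 0.1, 0.95])
```

Because each instance only inspects its own utilization, there is no single point of failure and no coupling to a central monitoring pipeline, which is the contrast the abstract draws with fixed-interval centralized autoscalers.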
Journal description:
Performance Evaluation functions as a leading journal in the area of modeling, measurement, and evaluation of performance aspects of computing and communication systems. As such, it aims to present a balanced and complete view of the entire Performance Evaluation profession. Hence, the journal is interested in papers that focus on one or more of the following dimensions:
-Define new performance evaluation tools, including measurement and monitoring tools as well as modeling and analytic techniques
-Provide new insights into the performance of computing and communication systems
-Introduce new application areas where performance evaluation tools can play an important role, and create new uses for performance evaluation tools.
More specifically, common application areas of interest include the performance of:
-Resource allocation and control methods and algorithms (e.g. routing and flow control in networks, bandwidth allocation, processor scheduling, memory management)
-System architecture, design and implementation
-Cognitive radio
-VANETs
-Social networks and media
-Energy efficient ICT
-Energy harvesting
-Data centers
-Data centric networks
-System reliability
-System tuning and capacity planning
-Wireless and sensor networks
-Autonomic and self-organizing systems
-Embedded systems
-Network science