Maintaining SLOs of Cloud-Native Applications Via Self-Adaptive Resource Sharing

Vladimir Podolskiy, Michael Mayo, Abigail M. Y. Koay, M. Gerndt, Panos Patros
{"title":"Maintaining SLOs of Cloud-Native Applications Via Self-Adaptive Resource Sharing","authors":"Vladimir Podolskiy, Michael Mayo, Abigail M. Y. Koay, M. Gerndt, Panos Patros","doi":"10.1109/SASO.2019.00018","DOIUrl":null,"url":null,"abstract":"With changing workloads, cloud service providers can leverage vertical container scaling (adding/removing resources) so that Service Level Objective (SLO) violations are minimized and spare resources are maximized. In this paper, we investigate a solution to the self-adaptive problem of vertical elasticity for co-located containerized applications. First, the system learns performance models that relate SLOs to workload, resource limits and service level indicators. Second, it derives limits that meet SLOs and minimize resource consumption via a combination of optimization and restricted brute-force search. Third, it vertically scales containers based on the derived limits. We evaluated our technique on a Kubernetes private cloud of 8 nodes with three deployed applications. The results registered two SLO violations out of 16 validation tests; acceptably low derivation times facilitate realistic deployment. Violations are primarily attributed to application specifics, such as garbage collection, which require further research to be circumvented.","PeriodicalId":259990,"journal":{"name":"2019 IEEE 13th International Conference on Self-Adaptive and Self-Organizing Systems (SASO)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 13th International Conference on Self-Adaptive and Self-Organizing Systems (SASO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SASO.2019.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

With changing workloads, cloud service providers can leverage vertical container scaling (adding/removing resources) so that Service Level Objective (SLO) violations are minimized and spare resources are maximized. In this paper, we investigate a solution to the self-adaptive problem of vertical elasticity for co-located containerized applications. First, the system learns performance models that relate SLOs to workload, resource limits, and service level indicators. Second, it derives limits that meet SLOs and minimize resource consumption via a combination of optimization and restricted brute-force search. Third, it vertically scales containers based on the derived limits. We evaluated our technique on a Kubernetes private cloud of 8 nodes with three deployed applications. The results registered two SLO violations out of 16 validation tests, and acceptably low derivation times facilitate realistic deployment. Violations are primarily attributed to application-specific behavior, such as garbage collection, which requires further research to circumvent.
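To make the three-step loop concrete, the sketch below illustrates one possible reading of it: fit a simple performance model relating a service level indicator (latency) to workload and container limits, then run a restricted brute-force search over candidate limits to find the cheapest configuration predicted to meet the SLO. This is a minimal illustration, not the authors' implementation; the linear model form, the candidate grids, and all names (slo_ms, cpu_grid, mem_grid, and so on) are assumptions made for this example.

```python
# Illustrative sketch of the abstract's three steps; not the paper's method.
# The linear model, grids, and synthetic data are assumptions for this example.
import numpy as np

def fit_performance_model(workload, cpu_limit, mem_limit, latency_ms):
    """Step 1: learn a performance model relating an SLI (latency) to the
    observed workload and the container's resource limits.
    Here: ordinary least squares over the features [1, workload, 1/cpu, 1/mem]."""
    X = np.column_stack([np.ones_like(workload), workload,
                         1.0 / cpu_limit, 1.0 / mem_limit])
    coef, *_ = np.linalg.lstsq(X, latency_ms, rcond=None)
    return coef

def predict_latency(coef, workload, cpu, mem):
    """Predicted latency for one candidate (workload, cpu, mem) point."""
    return coef @ np.array([1.0, workload, 1.0 / cpu, 1.0 / mem])

def derive_limits(coef, expected_workload, slo_ms, cpu_grid, mem_grid):
    """Step 2: restricted brute-force search over candidate limits; keep the
    cheapest configuration whose predicted latency still meets the SLO."""
    best = None
    for cpu in cpu_grid:
        for mem in mem_grid:
            if predict_latency(coef, expected_workload, cpu, mem) <= slo_ms:
                cost = cpu + mem / 1024.0          # toy resource-cost metric
                if best is None or cost < best[0]:
                    best = (cost, cpu, mem)
    return None if best is None else best[1:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic monitoring data: requests/s, CPU limit (cores), memory limit (MiB).
    workload = rng.uniform(50, 500, 200)
    cpu = rng.uniform(0.25, 4.0, 200)
    mem = rng.uniform(256, 4096, 200)
    latency = 5 + 0.1 * workload + 40 / cpu + 2000 / mem + rng.normal(0, 2, 200)

    coef = fit_performance_model(workload, cpu, mem, latency)
    limits = derive_limits(coef, expected_workload=300, slo_ms=80,
                           cpu_grid=np.arange(0.25, 4.01, 0.25),
                           mem_grid=np.arange(256, 4097, 256))
    # Step 3 would apply the derived limits, e.g. by patching the container's
    # resource spec through the Kubernetes API (not shown here).
    print("derived limits (cpu cores, memory MiB):", limits)
```

In practice the paper combines optimization with this kind of restricted search and must also account for co-located containers sharing node capacity; the sketch only shows the per-application search step.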