Amoeba: QoS-Awareness and Reduced Resource Usage of Microservices with Serverless Computing

Zijun Li, Quan Chen, Shuai Xue, Tao Ma, Yong Yang, Zhuo Song, M. Guo
{"title":"Amoeba: QoS-Awareness and Reduced Resource Usage of Microservices with Serverless Computing","authors":"Zijun Li, Quan Chen, Shuai Xue, Tao Ma, Yong Yang, Zhuo Song, M. Guo","doi":"10.1109/IPDPS47924.2020.00049","DOIUrl":null,"url":null,"abstract":"While microservices that have stringent Quality-of-Service constraints are deployed in the Clouds, the long-term rented infrastructures that host the microservices are under-utilized except peak hours due to the diurnal load pattern. It is resource efficient for Cloud vendors and cost efficient for service maintainers to deploy the microservices in the long-term infrastructure at high load and in the serverless computing platform at low load. However, prior work fails to take advantage of the opportunity, because the contention between microservices on the serverless platform seriously affects their response latencies.Our investigation shows that the load of a microservice, the shared resource contentions on the serverless platform, and its sensitivities to the contention together affect the response latency of the microservice on the platform. To this end, we propose Amoeba, a runtime system that dynamically switches the deployment of a microservice. Amoeba is comprised of a contention-aware deployment controller, a hybrid execution engine, and a multi-resource contention monitor. The deployment controller predicts the tail latency of a microservice based on its load and the contention on the serverless platform, and determines the appropriate deployment of the microservice. The hybrid execution engine enables the quick switch of the two deploy modes. The contention monitor periodically quantifies the contention on multiple types of shared resources. Experimental results show that Amoeba is able to significantly reduce up to 72.9% of CPU usage and up to 84.9% of memory usage compared with the traditional pure IaaS-based deployment, while ensuring the required latency target.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"156 1","pages":"399-408"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPS47924.2020.00049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

While microservices with stringent Quality-of-Service constraints are deployed in the Cloud, the long-term rented infrastructures that host them are under-utilized outside peak hours due to the diurnal load pattern. It is resource efficient for Cloud vendors and cost efficient for service maintainers to deploy the microservices on the long-term infrastructure at high load and on the serverless computing platform at low load. However, prior work fails to take advantage of this opportunity, because the contention between microservices on the serverless platform seriously affects their response latencies. Our investigation shows that the load of a microservice, the shared-resource contention on the serverless platform, and the microservice's sensitivity to that contention together determine its response latency on the platform. To this end, we propose Amoeba, a runtime system that dynamically switches the deployment of a microservice. Amoeba is composed of a contention-aware deployment controller, a hybrid execution engine, and a multi-resource contention monitor. The deployment controller predicts the tail latency of a microservice based on its load and the contention on the serverless platform, and determines the appropriate deployment of the microservice. The hybrid execution engine enables quick switching between the two deployment modes. The contention monitor periodically quantifies the contention on multiple types of shared resources. Experimental results show that Amoeba reduces CPU usage by up to 72.9% and memory usage by up to 84.9% compared with a traditional pure IaaS-based deployment, while ensuring the required latency target.
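The sketch below illustrates the decision logic the abstract describes: predict a microservice's tail latency from its load and the measured shared-resource contention, then keep it on the serverless platform only while the prediction stays within the QoS target. All names (ContentionSnapshot, predict_tail_latency, choose_deployment) and the simple additive latency model are illustrative assumptions, not the paper's actual predictor or interfaces.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class ContentionSnapshot:
    cpu: float        # normalized CPU contention on the serverless platform, in [0, 1]
    memory_bw: float  # normalized memory-bandwidth contention, in [0, 1]
    llc: float        # normalized last-level-cache contention, in [0, 1]

def predict_tail_latency(load_rps: float,
                         contention: ContentionSnapshot,
                         sensitivity: Dict[str, float],
                         base_latency_ms: float) -> float:
    """Estimate tail latency on the serverless platform from load and contention.
    A simple additive penalty model stands in for the paper's learned predictor."""
    penalty = (sensitivity["cpu"] * contention.cpu
               + sensitivity["memory_bw"] * contention.memory_bw
               + sensitivity["llc"] * contention.llc)
    return base_latency_ms * (1.0 + penalty) * (1.0 + load_rps / 1000.0)

def choose_deployment(load_rps: float,
                      contention: ContentionSnapshot,
                      sensitivity: Dict[str, float],
                      base_latency_ms: float,
                      qos_target_ms: float) -> str:
    """Stay on serverless only while the predicted tail latency meets the QoS
    target; otherwise switch back to the long-term (IaaS) deployment."""
    predicted = predict_tail_latency(load_rps, contention, sensitivity, base_latency_ms)
    return "serverless" if predicted <= qos_target_ms else "long-term-iaas"

if __name__ == "__main__":
    # Hypothetical numbers for a low-load period with moderate contention.
    snapshot = ContentionSnapshot(cpu=0.4, memory_bw=0.2, llc=0.3)
    sens = {"cpu": 0.8, "memory_bw": 0.5, "llc": 0.6}
    print(choose_deployment(load_rps=120, contention=snapshot,
                            sensitivity=sens, base_latency_ms=20.0,
                            qos_target_ms=50.0))
```

In the full system, the contention monitor would refresh the ContentionSnapshot periodically and the hybrid execution engine would perform the actual mode switch that this function merely selects.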