{"title":"预测云中容器化微服务的端到端尾部延迟","authors":"Joy Rahman, P. Lama","doi":"10.1109/IC2E.2019.00034","DOIUrl":null,"url":null,"abstract":"Large-scale web services are increasingly adopting cloud-native principles of application design to better utilize the advantages of cloud computing. This involves building an application using many loosely coupled service-specific components (microservices) that communicate via lightweight APIs, and utilizing containerization technologies to deploy, update, and scale these microservices quickly and independently. However, managing the end-to-end tail latency of requests flowing through the microservices is challenging in the absence of accurate performance models that can capture the complex interplay of microservice workflows with cloudinduced performance variability and inter-service performance dependencies. In this paper, we present performance characterization and modeling of containerized microservices in the cloud. Our modeling approach aims at enabling cloud platforms to combine resource usage metrics collected from multiple layers of the cloud environment, and apply machine learning techniques to predict the end-to-end tail latency of microservice workflows. We implemented and evaluated our modeling approach on NSF Cloud's Chameleon testbed using KVM for virtualization, Docker Engine for containerization and Kubernetes for container orchestration. Experimental results with an open-source microservices benchmark, Sock Shop, show that our modeling approach achieves high prediction accuracy even in the presence of multi-tenant performance interference.","PeriodicalId":226094,"journal":{"name":"2019 IEEE International Conference on Cloud Engineering (IC2E)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":"{\"title\":\"Predicting the End-to-End Tail Latency of Containerized Microservices in the Cloud\",\"authors\":\"Joy Rahman, P. Lama\",\"doi\":\"10.1109/IC2E.2019.00034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large-scale web services are increasingly adopting cloud-native principles of application design to better utilize the advantages of cloud computing. This involves building an application using many loosely coupled service-specific components (microservices) that communicate via lightweight APIs, and utilizing containerization technologies to deploy, update, and scale these microservices quickly and independently. However, managing the end-to-end tail latency of requests flowing through the microservices is challenging in the absence of accurate performance models that can capture the complex interplay of microservice workflows with cloudinduced performance variability and inter-service performance dependencies. In this paper, we present performance characterization and modeling of containerized microservices in the cloud. Our modeling approach aims at enabling cloud platforms to combine resource usage metrics collected from multiple layers of the cloud environment, and apply machine learning techniques to predict the end-to-end tail latency of microservice workflows. We implemented and evaluated our modeling approach on NSF Cloud's Chameleon testbed using KVM for virtualization, Docker Engine for containerization and Kubernetes for container orchestration. 
Experimental results with an open-source microservices benchmark, Sock Shop, show that our modeling approach achieves high prediction accuracy even in the presence of multi-tenant performance interference.\",\"PeriodicalId\":226094,\"journal\":{\"name\":\"2019 IEEE International Conference on Cloud Engineering (IC2E)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"28\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Cloud Engineering (IC2E)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC2E.2019.00034\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Cloud Engineering (IC2E)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC2E.2019.00034","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Predicting the End-to-End Tail Latency of Containerized Microservices in the Cloud
Large-scale web services are increasingly adopting cloud-native principles of application design to better exploit the advantages of cloud computing. This involves building an application from many loosely coupled, service-specific components (microservices) that communicate via lightweight APIs, and using containerization technologies to deploy, update, and scale these microservices quickly and independently. However, managing the end-to-end tail latency of requests flowing through the microservices is challenging in the absence of accurate performance models that can capture the complex interplay of microservice workflows with cloud-induced performance variability and inter-service performance dependencies. In this paper, we present performance characterization and modeling of containerized microservices in the cloud. Our modeling approach aims to enable cloud platforms to combine resource usage metrics collected from multiple layers of the cloud environment, and to apply machine learning techniques to predict the end-to-end tail latency of microservice workflows. We implemented and evaluated our modeling approach on NSF Cloud's Chameleon testbed using KVM for virtualization, Docker Engine for containerization, and Kubernetes for container orchestration. Experimental results with an open-source microservices benchmark, Sock Shop, show that our modeling approach achieves high prediction accuracy even in the presence of multi-tenant performance interference.
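The approach summarized above amounts to learning a mapping from multi-layer resource-usage metrics to end-to-end tail latency. The following is a minimal sketch of that general idea, not the authors' actual implementation: the feature names (e.g., VM-level CPU steal time, per-container utilization), the synthetic data, and the choice of a random-forest regressor from scikit-learn are all illustrative assumptions.

```python
# Sketch (not the paper's code): predict end-to-end p99 latency of a
# microservice workflow from resource metrics collected at multiple layers
# (hypervisor/VM, container). Features, data, and model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 2000

# Hypothetical multi-layer metrics: VM-level CPU steal time (a proxy for
# multi-tenant interference), host-level CPU/memory utilization, and
# per-container CPU and network usage for two microservices in a workflow.
X = np.column_stack([
    rng.uniform(0, 0.3, n),  # vm_cpu_steal
    rng.uniform(0, 1, n),    # host_cpu_util
    rng.uniform(0, 1, n),    # host_mem_util
    rng.uniform(0, 1, n),    # container_cpu_util_frontend
    rng.uniform(0, 1, n),    # container_cpu_util_backend
    rng.uniform(0, 1, n),    # container_net_io
])

# Synthetic p99 latency (ms): tail latency rises nonlinearly with
# utilization and interference, plus measurement noise.
y = (50 + 400 * X[:, 0] + 80 * X[:, 1] ** 3
     + 60 * X[:, 3] * X[:, 4] + rng.normal(0, 5, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAPE on held-out samples: {mean_absolute_percentage_error(y_test, pred):.2%}")
```

In a real deployment, the feature vectors would come from monitoring agents at the hypervisor, container, and application layers (e.g., via cAdvisor or similar collectors on a Kubernetes cluster), and the training targets from measured p99 latencies of workflow requests.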