SLA-aware Workload Scheduling Using Hybrid Cloud Services
Dheeraj Chahal, S. Palepu, Mayank Mishra, Rekha Singhal
{"title":"使用混合云服务支持sla的工作负载调度","authors":"Dheeraj Chahal, S. Palepu, Mayank Mishra, Rekha Singhal","doi":"10.1145/3452413.3464789","DOIUrl":null,"url":null,"abstract":"Cloud services have an auto-scaling feature for load balancing to meet the performance requirements of an application. Existing auto-scaling techniques are based on upscaling and downscaling cloud resources to distribute the dynamically varying workloads. However, bursty workloads pose many challenges for auto-scaling and sometimes result in Service Level Agreement (SLA) violations. Furthermore, over-provisioning or under-provisioning cloud resources to address dynamically evolving workloads results in performance degradation and cost escalation. In this work, we present a workload characterization based approach for scheduling the bursty workload on a highly scalable serverless architecture in conjunction with a machine learning (ML) platform. We present the use of Amazon Web Services (AWS) ML platform SageMaker and serverless computing platform Lambda for load balancing the inference workload to avoid SLA violations. We evaluate our approach using a recommender system that is based on a deep learning model for inference.","PeriodicalId":339058,"journal":{"name":"Proceedings of the 1st Workshop on High Performance Serverless Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"SLA-aware Workload Scheduling Using Hybrid Cloud Services\",\"authors\":\"Dheeraj Chahal, S. Palepu, Mayank Mishra, Rekha Singhal\",\"doi\":\"10.1145/3452413.3464789\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cloud services have an auto-scaling feature for load balancing to meet the performance requirements of an application. Existing auto-scaling techniques are based on upscaling and downscaling cloud resources to distribute the dynamically varying workloads. However, bursty workloads pose many challenges for auto-scaling and sometimes result in Service Level Agreement (SLA) violations. Furthermore, over-provisioning or under-provisioning cloud resources to address dynamically evolving workloads results in performance degradation and cost escalation. In this work, we present a workload characterization based approach for scheduling the bursty workload on a highly scalable serverless architecture in conjunction with a machine learning (ML) platform. We present the use of Amazon Web Services (AWS) ML platform SageMaker and serverless computing platform Lambda for load balancing the inference workload to avoid SLA violations. 
We evaluate our approach using a recommender system that is based on a deep learning model for inference.\",\"PeriodicalId\":339058,\"journal\":{\"name\":\"Proceedings of the 1st Workshop on High Performance Serverless Computing\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 1st Workshop on High Performance Serverless Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3452413.3464789\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st Workshop on High Performance Serverless Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3452413.3464789","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Cloud services have an auto-scaling feature for load balancing to meet the performance requirements of an application. Existing auto-scaling techniques upscale and downscale cloud resources to distribute dynamically varying workloads. However, bursty workloads pose many challenges for auto-scaling and sometimes result in Service Level Agreement (SLA) violations. Furthermore, over-provisioning or under-provisioning cloud resources to handle dynamically evolving workloads results in performance degradation and cost escalation. In this work, we present a workload-characterization-based approach for scheduling bursty workloads on a highly scalable serverless architecture in conjunction with a machine learning (ML) platform. We show how the Amazon Web Services (AWS) ML platform SageMaker and the serverless computing platform Lambda can be used together to load-balance the inference workload and avoid SLA violations. We evaluate our approach using a recommender system based on a deep learning model for inference.
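To make the hybrid scheduling idea concrete, the following is a minimal sketch (not the authors' implementation) of an SLA-aware dispatcher: requests go to a provisioned SageMaker endpoint while its estimated concurrency stays under a threshold derived from workload characterization, and bursty overflow spills to a Lambda function hosting the same model. The endpoint name, function name, and capacity limit below are illustrative assumptions.

import json
import boto3

SAGEMAKER_ENDPOINT = "recsys-inference-endpoint"  # assumed endpoint name
LAMBDA_FUNCTION = "recsys-inference-fn"           # assumed Lambda function name
MAX_ENDPOINT_INFLIGHT = 32                        # assumed capacity from workload characterization

sm_runtime = boto3.client("sagemaker-runtime")
lambda_client = boto3.client("lambda")

inflight = 0  # naive in-process counter; a real scheduler would track load per endpoint instance

def infer(payload: dict) -> dict:
    """Route one inference request: SageMaker first, Lambda on burst overflow."""
    global inflight
    body = json.dumps(payload)
    if inflight < MAX_ENDPOINT_INFLIGHT:
        inflight += 1
        try:
            # Provisioned ML platform path: invoke the SageMaker endpoint.
            resp = sm_runtime.invoke_endpoint(
                EndpointName=SAGEMAKER_ENDPOINT,
                ContentType="application/json",
                Body=body,
            )
            return json.loads(resp["Body"].read())
        finally:
            inflight -= 1
    # Serverless path: spill the burst to Lambda so requests are not queued
    # behind the saturated endpoint, which is what would violate the SLA.
    resp = lambda_client.invoke(FunctionName=LAMBDA_FUNCTION, Payload=body)
    return json.loads(resp["Payload"].read())

The key design choice in such a scheme is the spillover threshold: it should come from offline characterization of the model's latency-versus-concurrency behavior on the provisioned endpoint, so that overflow is redirected before queuing delay pushes response times past the SLA budget.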