{"title":"最小化无服务器部署中的冷启动时间","authors":"Daniyaal Khan, Basant Subba, Sangeeta Sharma","doi":"10.1145/3549206.3549234","DOIUrl":null,"url":null,"abstract":"Serverless deployments of Cloud Applications involve containerizing an application that then remains dormant(cold) until a trigger event like a user visiting an endpoint occurs. The host machine then provisions this dormant container into a Virtual Machine that serves the request and then stays idle, waiting for subsequent requests to come in(warm). While the performance for requests made while a container is warm is indistinguishable from a fully managed server stack, requests when a container is cold can take several seconds because of the overheads involved in VM provisioning. The time at which a container goes from warm to cold is decided by the host VM depending on existing load and it’s configuration. This paper aims to come up with methods to reduce the frequency and duration of cold starts occurring across different workloads and cloud providers. By changing base images, lazy loading I/O and DB initializations and modifying CPU capacity the cold start times on GCP Cloud Run were reduced by upto 5% and 10.5% for simple and database dependent workloads respectively.","PeriodicalId":199675,"journal":{"name":"Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Minimizing Cold Start Times in Serverless Deployments\",\"authors\":\"Daniyaal Khan, Basant Subba, Sangeeta Sharma\",\"doi\":\"10.1145/3549206.3549234\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Serverless deployments of Cloud Applications involve containerizing an application that then remains dormant(cold) until a trigger event like a user visiting an endpoint occurs. The host machine then provisions this dormant container into a Virtual Machine that serves the request and then stays idle, waiting for subsequent requests to come in(warm). While the performance for requests made while a container is warm is indistinguishable from a fully managed server stack, requests when a container is cold can take several seconds because of the overheads involved in VM provisioning. The time at which a container goes from warm to cold is decided by the host VM depending on existing load and it’s configuration. This paper aims to come up with methods to reduce the frequency and duration of cold starts occurring across different workloads and cloud providers. 
By changing base images, lazy loading I/O and DB initializations and modifying CPU capacity the cold start times on GCP Cloud Run were reduced by upto 5% and 10.5% for simple and database dependent workloads respectively.\",\"PeriodicalId\":199675,\"journal\":{\"name\":\"Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3549206.3549234\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3549206.3549234","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Minimizing Cold Start Times in Serverless Deployments
Serverless deployments of cloud applications involve containerizing an application that then remains dormant (cold) until a trigger event occurs, such as a user visiting an endpoint. The host machine then provisions this dormant container into a virtual machine that serves the request and afterwards stays idle, waiting for subsequent requests (warm). While the performance of requests made while a container is warm is indistinguishable from a fully managed server stack, requests arriving while a container is cold can take several seconds because of the overheads involved in VM provisioning. The point at which a container transitions from warm to cold is decided by the host, based on its existing load and configuration. This paper proposes methods to reduce the frequency and duration of cold starts across different workloads and cloud providers. By changing base images, lazy loading I/O and database initializations, and modifying CPU capacity, cold start times on GCP Cloud Run were reduced by up to 5% and 10.5% for simple and database-dependent workloads respectively.
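
To make the lazy-initialization idea concrete, below is a minimal sketch of how a Cloud Run service might defer database setup until the first request instead of performing it at container startup. The Flask app, the sqlite3 stand-in for a real database client, and the environment variable names are illustrative assumptions for this sketch, not the authors' published implementation.

```python
# Minimal sketch (assumed setup): defer database client creation until the
# first request so the container's cold start does not pay the connection cost.
import os

from flask import Flask

app = Flask(__name__)

_db = None  # created lazily on first use, not at import time during cold start


def get_db():
    """Return a cached database handle, creating it on the first call."""
    global _db
    if _db is None:
        import sqlite3  # stand-in for a real client library (assumption)
        _db = sqlite3.connect(os.environ.get("DB_PATH", ":memory:"))
    return _db


@app.route("/")
def handler():
    row = get_db().execute("SELECT 1").fetchone()
    return {"ok": row[0] == 1}


if __name__ == "__main__":
    # Cloud Run injects PORT; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Moving such work out of module import time shortens the window between container provisioning and the first request being served; the paper's other levers, a slimmer base image and a larger CPU allocation, act on the same startup path.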