Minimizing Latency in Serving Requests through Differential Template Caching in a Cloud
Deepak Jeswani, Manish Gupta, Pradipta De, Arpit Malani, U. Bellur
2012 IEEE Fifth International Conference on Cloud Computing, 24 June 2012. DOI: 10.1109/CLOUD.2012.17 (https://doi.org/10.1109/CLOUD.2012.17)
Citations: 13
Abstract
In the Software-as-a-Service (SaaS) cloud delivery model, a hosting center deploys a Virtual Machine (VM) image template on a server on demand. Image templates are usually maintained in a central repository. With geographically dispersed hosting centers, transferring a large, often gigabyte-sized, template file from the repository incurs high latency due to low Internet bandwidth. An architecture that maintains a template cache collocated with the hosting centers can reduce request service latency. Since templates are large, caching complete templates is prohibitive in terms of storage space. To optimize cache space requirements, as well as to reduce transfers from the repository, we propose a differential template caching technique called DiffCache. A difference file, or patch, between two templates that share common components is small in size. DiffCache computes an optimal selection of templates and patches based on the frequency of requests for specific templates. A template missing from the cache can be generated if any cached template can be patched with a cached patch file, thereby saving the transfer time from the repository at the cost of a relatively small patching time. We show that patch-based caching, coupled with intelligent population of the cache, can lead to a 90% improvement in service request latency compared with caching only template files.
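The serving path implied by the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the names `TemplateCache`, `apply_patch`, and `fetch_from_repository` are assumptions, and the actual DiffCache selection and patching machinery is described in the paper itself.

```python
# Hypothetical sketch of the DiffCache serving path described in the abstract.
# All names (TemplateCache, apply_patch, fetch_from_repository) are illustrative
# assumptions, not the authors' actual code.

class TemplateCache:
    def __init__(self):
        self.templates = {}   # template_id -> cached full template (bytes)
        self.patches = {}     # (base_id, target_id) -> cached patch (bytes)

    def serve(self, template_id, fetch_from_repository, apply_patch):
        # Case 1: the requested template is cached in full.
        if template_id in self.templates:
            return self.templates[template_id]

        # Case 2: a cached base template plus a cached patch can reconstruct
        # the requested template, avoiding a repository transfer at the cost
        # of a relatively small patching time.
        for (base_id, target_id), patch in self.patches.items():
            if target_id == template_id and base_id in self.templates:
                return apply_patch(self.templates[base_id], patch)

        # Case 3: cache miss -- pay the high-latency transfer from the
        # central repository.
        return fetch_from_repository(template_id)
```

Which templates and patches to keep in the cache is the optimization DiffCache solves, driven by the request frequency of individual templates.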