Minimizing Latency in Serving Requests through Differential Template Caching in a Cloud

Deepak Jeswani, Manish Gupta, Pradipta De, Arpit Malani, U. Bellur
{"title":"Minimizing Latency in Serving Requests through Differential Template Caching in a Cloud","authors":"Deepak Jeswani, Manish Gupta, Pradipta De, Arpit Malani, U. Bellur","doi":"10.1109/CLOUD.2012.17","DOIUrl":null,"url":null,"abstract":"In Software-as-a-Service (SaaS) cloud delivery model, a hosting center deploys a Virtual Machine (VM) image template on a server on demand. Image templates are usually maintained in a central repository. With geographically dispersed hosting centers, time to transfer a large, often GigaByte sized, template file from the repository faces high latency due to low Internet bandwidth. An architecture that maintains a template cache, collocated with the hosting centers, can reduce request service latency. Since templates are large in size, caching complete templates is prohibitive in terms of storage space. In order to optimize cache space requirement, as well as, to reduce transfers from the repository, we propose a differential template caching technique, called DiffCache. A difference file or a patch between two templates, that have common components, is small in size. DiffCache computes an optimal selection of templates and patches based on the frequency of requests for specific templates. A template missing in the cache can be generated if any cached template can be patched with a cached patch file, thereby saving the transfer time from the repository at the cost of relatively small patching time. We show that patch based caching coupled with intelligent population of the cache can lead to a 90% improvement in service request latency when compared with caching only template files.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE Fifth International Conference on Cloud Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CLOUD.2012.17","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 13

Abstract

In the Software-as-a-Service (SaaS) cloud delivery model, a hosting center deploys a Virtual Machine (VM) image template on a server on demand. Image templates are usually maintained in a central repository. With geographically dispersed hosting centers, transferring a large, often gigabyte-sized, template file from the repository incurs high latency due to low Internet bandwidth. An architecture that maintains a template cache collocated with the hosting centers can reduce request service latency. Since templates are large, caching complete templates is prohibitive in terms of storage space. To optimize the cache space requirement as well as reduce transfers from the repository, we propose a differential template caching technique called DiffCache. A difference file, or patch, between two templates that share common components is small in size. DiffCache computes an optimal selection of templates and patches based on the frequency of requests for specific templates. A template missing from the cache can be generated if any cached template can be patched with a cached patch file, thereby saving the transfer time from the repository at the cost of a relatively small patching time. We show that patch-based caching, coupled with intelligent population of the cache, can lead to a 90% improvement in service request latency compared with caching only template files.
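The abstract describes a three-way lookup: serve a request directly from a cached template, reconstruct the requested template by applying a cached patch to a cached base template, or fall back to a full transfer from the central repository. The sketch below is a minimal Python illustration of that lookup path only; it is not the authors' implementation, and all names (DiffCache, apply_patch, fetch_from_repository) are illustrative assumptions. The paper's frequency-driven selection of which templates and patches to cache is not modeled here.

    # Minimal sketch (assumed design, not the paper's code) of the patch-based
    # lookup path described in the abstract.
    class DiffCache:
        def __init__(self, fetch_from_repository, apply_patch):
            self.templates = {}        # template_id -> full template bytes kept in the cache
            self.patches = {}          # (base_id, target_id) -> patch bytes kept in the cache
            self.fetch = fetch_from_repository   # slow WAN transfer from the central repository
            self.apply_patch = apply_patch       # e.g. a bsdiff/xdelta-style patch tool

        def get(self, template_id):
            # 1. Direct hit: the full template is already cached.
            if template_id in self.templates:
                return self.templates[template_id]

            # 2. Patch hit: a cached base template plus a cached patch
            #    can regenerate the requested template locally, paying only
            #    the (relatively small) patching time.
            for (base_id, target_id), patch in self.patches.items():
                if target_id == template_id and base_id in self.templates:
                    return self.apply_patch(self.templates[base_id], patch)

            # 3. Miss: transfer the full template from the repository.
            template = self.fetch(template_id)
            self.templates[template_id] = template
            return template

In this sketch the cache contents are assumed to have been chosen in advance; in the paper, that selection of templates and patches is computed from the request frequencies of individual templates so that popular templates are reachable either directly or via a small patch.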