Designing Efficient Cooperative Caching Schemes for Multi-Tier Data-Centers over RDMA-enabled Networks

S. Narravula, Hyun-Wook Jin, K. Vaidyanathan, D. Panda
{"title":"Designing Efficient Cooperative Caching Schemes for Multi-Tier Data-Centers over RDMA-enabled Networks","authors":"S. Narravula, Hyun-Wook Jin, K. Vaidyanathan, D. Panda","doi":"10.1109/CCGRID.2006.33","DOIUrl":null,"url":null,"abstract":"Caching has been a very important technique in improving the performance and scalability of web-serving datacenters. The research community has proposed cooperation of caching servers to achieve higher performance benefits. These existing cooperative caching mechanisms often partially duplicate the cached data redundantly on multiple servers for higher performance (by optimizing the datafetch costs for multiple similar requests). With the advent of RDMA enabled interconnects these basic data-fetch cost estimates have changed significantly. Further, the effective utilization of the vast resources available across multiple tiers in today’s data-centers is of obvious interest. Hence, a systematic study of these various issues involved is of paramount importance. In this paper, we present several cooperative caching schemes that are designed to benefit in the light of the above mentioned trends. In particular, we design schemes that take advantage of the RDMA capabilities of networks and the multitude of resources available in modern multi-tier data-centers. Our designs are implemented on InfiniBand based clusters to work in conjunction with Apache based servers. Our experimental results show that our schemes achieve a throughput improvement of up to 35% as compared to the basic cooperative caching schemes and 180% better than the simple single node caching schemes. Our experimental results lead us to a new scheme which can deliver good performance in many Caching has been a very important technique in improving the performance and scalability of web-serving datacenters. The research community has proposed cooperation of caching servers to achieve higher performance benefits. These existing cooperative caching mechanisms often partially duplicate the cached data redundantly on multiple servers for higher performance (by optimizing the datafetch costs for multiple similar requests). With the advent of RDMA enabled interconnects these basic data-fetch cost estimates have changed significantly. Further, the effective utilization of the vast resources available across multiple tiers in today’s data-centers is of obvious interest. Hence, a systematic study of these various issues involved is of paramount importance. In this paper, we present several cooperative caching schemes that are designed to benefit in the light of the above mentioned trends. In particular, we design schemes that take advantage of the RDMA capabilities of networks and the multitude of resources available in modern multi-tier data-centers. Our designs are implemented on InfiniBand based clusters to work in conjunction with Apache based servers. Our experimental results show that our schemes achieve a throughput improvement of up to 35% as compared to the basic cooperative caching schemes and 180% better than the simple single node caching schemes. 
Our experimental results lead us to a new scheme which can deliver good performance in many scenarios.","PeriodicalId":419226,"journal":{"name":"Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGRID.2006.33","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15

Abstract

Caching has been a very important technique for improving the performance and scalability of web-serving data-centers. The research community has proposed cooperation among caching servers to achieve higher performance benefits. Existing cooperative caching mechanisms often partially duplicate cached data redundantly on multiple servers for higher performance (by optimizing the data-fetch costs for multiple similar requests). With the advent of RDMA-enabled interconnects, these basic data-fetch cost estimates have changed significantly. Further, the effective utilization of the vast resources available across multiple tiers in today's data-centers is of obvious interest. Hence, a systematic study of the various issues involved is of paramount importance. In this paper, we present several cooperative caching schemes designed to benefit from the above trends. In particular, we design schemes that take advantage of the RDMA capabilities of networks and the multitude of resources available in modern multi-tier data-centers. Our designs are implemented on InfiniBand-based clusters to work in conjunction with Apache-based servers. Our experimental results show that our schemes achieve a throughput improvement of up to 35% compared to the basic cooperative caching schemes and 180% over simple single-node caching schemes. These results lead us to a new scheme that can deliver good performance in many scenarios.
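To make the data-fetch cost argument concrete, the sketch below is an illustrative cost comparison, not code from the paper: all sources, latencies, and bandwidth figures are assumed round numbers chosen only to show why an RDMA-capable interconnect narrows the gap between a remote cooperative-cache fetch and a local cache hit.

/*
 * Illustrative sketch (assumed cost model, not the paper's implementation):
 * estimate the time to fetch a cached object from several possible sources
 * using a simple fixed-latency + size/bandwidth model.
 */
#include <stdio.h>

/* Assumed per-source cost parameters: fixed latency (us) and bandwidth (MB/s). */
typedef struct {
    const char *name;
    double latency_us;
    double bandwidth_mbps;   /* MB per second */
} fetch_source_t;

/* Estimated time (us) to fetch 'size_kb' kilobytes from a given source. */
static double fetch_cost_us(const fetch_source_t *src, double size_kb)
{
    double transfer_us = (size_kb / 1024.0) / src->bandwidth_mbps * 1e6;
    return src->latency_us + transfer_us;
}

int main(void)
{
    /* All numbers below are assumptions for illustration only. */
    fetch_source_t local_ram = { "local cache hit",       1.0, 10000.0 };
    fetch_source_t rdma_peer = { "RDMA read from peer",  10.0,   900.0 };
    fetch_source_t tcp_peer  = { "TCP fetch from peer", 100.0,   100.0 };
    fetch_source_t backend   = { "back-end fetch",     5000.0,    50.0 };

    double sizes_kb[] = { 4.0, 64.0, 1024.0 };

    for (unsigned i = 0; i < sizeof(sizes_kb) / sizeof(sizes_kb[0]); i++) {
        double s = sizes_kb[i];
        printf("object size %6.0f KB: local %8.1f us, rdma %8.1f us, "
               "tcp %8.1f us, backend %8.1f us\n",
               s,
               fetch_cost_us(&local_ram, s),
               fetch_cost_us(&rdma_peer, s),
               fetch_cost_us(&tcp_peer, s),
               fetch_cost_us(&backend, s));
    }
    return 0;
}

Under such assumed numbers, a one-sided RDMA read from a peer's cache is far closer to a local hit than a conventional TCP fetch or a back-end fetch, which is consistent with the abstract's point that RDMA-enabled interconnects change the basic data-fetch cost estimates on which cooperative caching (and its data-duplication decisions) is based.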