The Case for RackOut: Scalable Data Serving Using Rack-Scale Systems

Stanko Novakovic, Alexandros Daglis, Edouard Bugnion, Babak Falsafi, Boris Grot
Published in: Proceedings of the Seventh ACM Symposium on Cloud Computing, October 5, 2016
DOI: 10.1145/2987550.2987577
Cited by: 35

Abstract

To provide low latency and high throughput guarantees, most large key-value stores keep the data in the memory of many servers. Despite the natural parallelism across lookups, the load imbalance, introduced by heavy skew in the popularity distribution of keys, limits performance. To avoid violating tail latency service-level objectives, systems tend to keep server utilization low and organize the data in micro-shards, which provides units of migration and replication for the purpose of load balancing. These techniques reduce the skew, but incur additional monitoring, data replication and consistency maintenance overheads. In this work, we introduce RackOut, a memory pooling technique that leverages the one-sided remote read primitive of emerging rack-scale systems to mitigate load imbalance while respecting service-level objectives. In RackOut, the data is aggregated at rack-scale granularity, with all of the participating servers in the rack jointly servicing all of the rack's micro-shards. We develop a queuing model to evaluate the impact of RackOut at the datacenter scale. In addition, we implement a RackOut proof-of-concept key-value store, evaluate it on two experimental platforms based on RDMA and Scale-Out NUMA, and use these results to validate the model. Our results show that RackOut can increase throughput up to 6x for RDMA and 8.6x for Scale-Out NUMA compared to a scale-out deployment, while respecting tight tail latency service-level objectives.
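The abstract's core idea, that pooling a rack's memory lets any server absorb requests for a hot micro-shard via one-sided remote reads, can be illustrated with a small load-imbalance simulation. Everything here (the server and shard counts, the Zipf exponent, the hash-based shard placement) is an illustrative assumption, not the paper's actual implementation or parameters:

```python
import random
from collections import Counter

random.seed(42)

NUM_SERVERS = 8           # one rack of 8 servers (illustrative)
SHARDS_PER_SERVER = 16    # micro-shards per server (illustrative)
NUM_SHARDS = NUM_SERVERS * SHARDS_PER_SERVER
NUM_KEYS = 10_000
NUM_REQUESTS = 100_000
ZIPF_S = 1.0              # skew exponent of the popularity distribution

# Zipf-like key popularity: weight of the key at a given rank is 1/(rank+1)^s.
weights = [1.0 / (rank + 1) ** ZIPF_S for rank in range(NUM_KEYS)]
requests = random.choices(range(NUM_KEYS), weights=weights, k=NUM_REQUESTS)

# Hash each key to a micro-shard and statically place shards on servers.
shard_load = Counter(key % NUM_SHARDS for key in requests)

# Scale-out baseline: each server serves only its own shards, so a hot
# shard overloads its home server.
scaleout_load = Counter()
for shard, load in shard_load.items():
    scaleout_load[shard % NUM_SERVERS] += load

# RackOut: every server in the rack can read any of the rack's shards
# with a one-sided remote read, so requests can be spread uniformly.
rackout_load = NUM_REQUESTS / NUM_SERVERS

print("max per-server load, scale-out:", max(scaleout_load.values()))
print("max per-server load, RackOut:  ", rackout_load)
```

Under skewed popularity the hottest scale-out server carries well above the rack-wide average, which is exactly the imbalance that forces low utilization to meet tail-latency SLOs; RackOut's pooled serving brings the maximum down to that average, modulo the remote-read cost the paper's queuing model accounts for.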