C-LSM: Cooperative Log Structured Merge Trees

Natasha Mittal, Faisal Nawab
{"title":"C-LSM:协作日志结构合并树","authors":"Natasha Mittal, Faisal Nawab","doi":"10.1145/3357223.3365443","DOIUrl":null,"url":null,"abstract":"The basic structure of the LSM[3] tree consists of four levels (we are considering only 4 levels), L0 in memory, and L1 to L3 in the disk. Compaction in L0/L1 is done through tiering, and compaction in the rest of the tree is done through leveling. Cooperative-LSM (C-LSM) is implemented by deconstructing the monolithic structure of LSM[3] trees to enhance the scalability of LSM trees by utilizing the resources of multiple machines in a more flexible way. The monolithic structure of LSM[3] tree lacks flexibility, and the only way to deal with an increased load on is to re-partition the data and distribute it across nodes. C-LSM comprises of three components - leader, compactor, and backup. Leader node receives write requests. It maintains Levels L0 and L1 of the LSM tree and performs minor compactions. Compactor maintains the rest of the levels (L2 and L3) and is responsible for compacting them. Backup maintains a copy of the entire LSM tree for fault-tolerance and read availability. The advantages that C-LSM provides are two-fold: • one can place components on different machines, and • one can have more than one instance of each component Running more than one instance for each component can enable various performance advantages: • Increasing the number of Leaders enables to digest data faster because the performance of a single machine no longer limits the system. • Increasing the number of Compactors enables to offload compaction[1] to more nodes and thus reduce the impact of compaction on other functions. • Increasing the number of backups increases read availability. Although, all these advantages can be achieved by re-partitioning the data and distributing the partitions across nodes, which most current LSM variants do. However, we hypothesize that partitioning is not feasible for all cases. For example, a dynamic workload where access patterns are unpredictable and no clear partitioning is feasible. In this case, the developer either has to endure the overhead of re-partitioning the data all the time or not be able to utilize the system resources efficiently if no re-partitioning is done. C-LSM enables scaling (and down-scaling) with less overhead compared to re-partitioning; if a partition is suddenly getting more requests, one can simply add a new component on another node. Each one of the components has different characteristics in terms of how it affects the workload and I/O. By having the flexibility to break down the components, one can find ways to distribute them in a way to increase overall efficiency. Having multiple instances of the three components leads to interesting challenges in terms of how to ensure that they work together without leading to any inconsistencies. We are trying to solve this through careful design of how these components interact and how to manage the decisions when failures or scaling events happen. Another interesting problem to solve is having multiple instances of C-LSM, each dedicated to one edge node or a cluster of edge nodes. For mobile-based or real-time data analysis applications, more and more data needs to be processed in edge nodes[2] itself and having a dedicated C-LSM will improve the overall latency. There are also some down-sides with more than one components that need to be addressed. 
For e.g., having more than one compaction server leads to the need for compaction across machines and/or redundancy of data, or having more than one leader needs to maintain a linearizable access.","PeriodicalId":91949,"journal":{"name":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","volume":"170 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"C-LSM: Cooperative Log Structured Merge Trees\",\"authors\":\"Natasha Mittal, Faisal Nawab\",\"doi\":\"10.1145/3357223.3365443\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The basic structure of the LSM[3] tree consists of four levels (we are considering only 4 levels), L0 in memory, and L1 to L3 in the disk. Compaction in L0/L1 is done through tiering, and compaction in the rest of the tree is done through leveling. Cooperative-LSM (C-LSM) is implemented by deconstructing the monolithic structure of LSM[3] trees to enhance the scalability of LSM trees by utilizing the resources of multiple machines in a more flexible way. The monolithic structure of LSM[3] tree lacks flexibility, and the only way to deal with an increased load on is to re-partition the data and distribute it across nodes. C-LSM comprises of three components - leader, compactor, and backup. Leader node receives write requests. It maintains Levels L0 and L1 of the LSM tree and performs minor compactions. Compactor maintains the rest of the levels (L2 and L3) and is responsible for compacting them. Backup maintains a copy of the entire LSM tree for fault-tolerance and read availability. The advantages that C-LSM provides are two-fold: • one can place components on different machines, and • one can have more than one instance of each component Running more than one instance for each component can enable various performance advantages: • Increasing the number of Leaders enables to digest data faster because the performance of a single machine no longer limits the system. • Increasing the number of Compactors enables to offload compaction[1] to more nodes and thus reduce the impact of compaction on other functions. • Increasing the number of backups increases read availability. Although, all these advantages can be achieved by re-partitioning the data and distributing the partitions across nodes, which most current LSM variants do. However, we hypothesize that partitioning is not feasible for all cases. For example, a dynamic workload where access patterns are unpredictable and no clear partitioning is feasible. In this case, the developer either has to endure the overhead of re-partitioning the data all the time or not be able to utilize the system resources efficiently if no re-partitioning is done. C-LSM enables scaling (and down-scaling) with less overhead compared to re-partitioning; if a partition is suddenly getting more requests, one can simply add a new component on another node. Each one of the components has different characteristics in terms of how it affects the workload and I/O. By having the flexibility to break down the components, one can find ways to distribute them in a way to increase overall efficiency. Having multiple instances of the three components leads to interesting challenges in terms of how to ensure that they work together without leading to any inconsistencies. 
We are trying to solve this through careful design of how these components interact and how to manage the decisions when failures or scaling events happen. Another interesting problem to solve is having multiple instances of C-LSM, each dedicated to one edge node or a cluster of edge nodes. For mobile-based or real-time data analysis applications, more and more data needs to be processed in edge nodes[2] itself and having a dedicated C-LSM will improve the overall latency. There are also some down-sides with more than one components that need to be addressed. For e.g., having more than one compaction server leads to the need for compaction across machines and/or redundancy of data, or having more than one leader needs to maintain a linearizable access.\",\"PeriodicalId\":91949,\"journal\":{\"name\":\"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)\",\"volume\":\"170 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3357223.3365443\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3357223.3365443","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The basic structure of the LSM[3] tree we consider consists of four levels: L0 in memory and L1 to L3 on disk. Compaction between L0 and L1 is done through tiering, and compaction in the rest of the tree is done through leveling. Cooperative-LSM (C-LSM) deconstructs this monolithic LSM[3] tree structure to enhance scalability by utilizing the resources of multiple machines in a more flexible way. The monolithic structure lacks flexibility: the only way to deal with an increased load is to re-partition the data and distribute it across nodes.

C-LSM comprises three components: a leader, a compactor, and a backup. The leader receives write requests, maintains levels L0 and L1 of the LSM tree, and performs minor compactions. The compactor maintains the remaining levels (L2 and L3) and is responsible for compacting them. The backup maintains a copy of the entire LSM tree for fault tolerance and read availability.

The advantages that C-LSM provides are two-fold:

• one can place components on different machines, and
• one can run more than one instance of each component.

Running more than one instance of each component enables various performance advantages:

• Increasing the number of leaders lets the system ingest data faster, because the performance of a single machine no longer limits it.
• Increasing the number of compactors offloads compaction[1] to more nodes and thus reduces the impact of compaction on other functions.
• Increasing the number of backups increases read availability.

All of these advantages can also be achieved by re-partitioning the data and distributing the partitions across nodes, which is what most current LSM variants do. However, we hypothesize that partitioning is not feasible in all cases, for example with a dynamic workload whose access patterns are unpredictable and for which no clear partitioning exists. In such cases, the developer either has to endure the overhead of continually re-partitioning the data or, if no re-partitioning is done, cannot utilize the system resources efficiently. C-LSM enables scaling up (and down) with less overhead than re-partitioning: if a partition suddenly receives more requests, one can simply add a new component on another node. Each component has different characteristics in terms of how it affects the workload and I/O, so the flexibility to break the system into components allows distributing them in ways that increase overall efficiency.

Running multiple instances of the three components raises interesting challenges in ensuring that they work together without introducing inconsistencies. We are trying to solve this through careful design of how the components interact and of how decisions are managed when failures or scaling events happen. Another interesting problem is running multiple instances of C-LSM, each dedicated to one edge node or a cluster of edge nodes. For mobile-based or real-time data analysis applications, more and more data needs to be processed at the edge nodes[2] themselves, and a dedicated C-LSM improves overall latency. There are also downsides to having more than one instance of a component that need to be addressed: for example, having more than one compaction server leads to compaction across machines and/or redundancy of data, and having more than one leader requires maintaining linearizable access.
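To make the role split concrete, below is a minimal, single-process Python sketch of the three components the abstract describes. The class names (Leader, Compactor, Backup), the methods, and the flush thresholds are illustrative assumptions, not the authors' implementation: the leader holds L0 in memory and flushes runs into L1 by tiering, the compactor keeps L2 and L3 as single sorted runs merged by leveling, and the backup mirrors every write for read availability.

```python
# Hypothetical sketch of the C-LSM role split (leader / compactor / backup).
# A real deployment would place each role on a separate machine and ship runs
# over the network; this sketch only illustrates the division of work.


class Backup:
    """Holds a copy of every write, for fault tolerance and read availability."""

    def __init__(self):
        self.copy = {}

    def apply(self, key, value):
        self.copy[key] = value

    def get(self, key):
        return self.copy.get(key)


class Compactor:
    """Maintains L2 and L3 and compacts them by leveling (one sorted run each)."""

    def __init__(self, l2_limit=16):
        self.l2 = {}              # leveling: kept as a single sorted run
        self.l3 = {}
        self.l2_limit = l2_limit  # hypothetical threshold for a major compaction

    def absorb(self, runs):
        # Merge incoming L1 runs into L2, keeping L2 as one sorted run.
        for run in runs:
            for key, value in run:
                self.l2[key] = value
        self.l2 = dict(sorted(self.l2.items()))
        if len(self.l2) >= self.l2_limit:
            self._major_compaction()

    def _major_compaction(self):
        # Leveling at L2/L3: merge all of L2 into L3, leaving L2 empty.
        self.l3.update(self.l2)
        self.l3 = dict(sorted(self.l3.items()))
        self.l2 = {}


class Leader:
    """Receives writes, holds L0 (in memory) and L1, performs minor compactions."""

    def __init__(self, compactor, backup, l0_limit=4):
        self.l0 = {}              # in-memory level (memtable)
        self.l1 = []              # tiering: sorted runs accumulate here
        self.l0_limit = l0_limit  # hypothetical flush threshold
        self.compactor = compactor
        self.backup = backup

    def put(self, key, value):
        self.l0[key] = value
        self.backup.apply(key, value)  # keep the backup's copy current
        if len(self.l0) >= self.l0_limit:
            self._minor_compaction()

    def _minor_compaction(self):
        # Tiering at L0/L1: flush the memtable as a new sorted run in L1.
        self.l1.append(sorted(self.l0.items()))
        self.l0 = {}
        if len(self.l1) >= 2:
            # Hand the accumulated L1 runs off to the compactor role.
            runs, self.l1 = self.l1, []
            self.compactor.absorb(runs)


if __name__ == "__main__":
    backup = Backup()
    compactor = Compactor()
    leader = Leader(compactor, backup)
    for i in range(40):
        leader.put(f"k{i:03d}", i)
    print(backup.get("k007"))  # reads can be served from the backup's copy
```

In this sketch each role is just an object, so the coordination questions the abstract raises (multiple leaders needing linearizable access, cross-machine compaction, consistency under failures and scaling events) do not arise; they are exactly the challenges C-LSM must address once the roles are distributed.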