B^mad-tree: an efficient data structure for parallel processing

Sajal K. Das, M. Demuynck
{"title":"B/sup mad/-tree: an efficient data structure for parallel processing","authors":"Sajal K. Das, M. Demuynck","doi":"10.1109/SPDP.1996.570359","DOIUrl":null,"url":null,"abstract":"B-trees are used for accessing large database files, stored in lexicographic order on the secondary storage devices. Algorithms for concurrent B-tree data structures achieve only limited speedup when implemented on a parallel computer. To improve the performance, we propose a variant of the B/sup link/-tree, called the B/sup mad/-tree, which allows insertion without node splits, with multiple access in its leaf nodes, and dilation in both the index and the leaf nodes. Parallel algorithms for search, insert and restructuring are designed for partitioned, locked and distributed models. Only part of an insertion node is locked during the insert, and simultaneous insertions by multiple processors in the same node are allowed. A restructuring algorithm runs periodically in the background and requires at most one wait by any search or update operation. Our implementations demonstrate that the B/sup mad/-tree algorithms outperform the best known B/sup link/-trees, and compare favorably with linear hashing. We achieve good speedup (e.g., 4.79 with 8 processors) for partitioned algorithms, and moderate speedup (2.49 with 8 processors) for locked algorithms, even including overhead costs. The insert times obtained for B/sup mad/-trees are 50% to 60% less than that for the B/sup link/-trees in partitioned implementations, and 70% to 80% less in locked implementations. The speedup results on the distributed memory platform (a network of workstations) were not that encouraging due to high communication costs.","PeriodicalId":360478,"journal":{"name":"Proceedings of SPDP '96: 8th IEEE Symposium on Parallel and Distributed Processing","volume":"102 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of SPDP '96: 8th IEEE Symposium on Parallel and Distributed Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPDP.1996.570359","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

B-trees are used for accessing large database files stored in lexicographic order on secondary storage devices. Algorithms for concurrent B-tree data structures achieve only limited speedup when implemented on a parallel computer. To improve performance, we propose a variant of the B^link-tree, called the B^mad-tree, which allows insertion without node splits, multiple access in its leaf nodes, and dilation in both the index and the leaf nodes. Parallel algorithms for search, insert and restructuring are designed for partitioned, locked and distributed models. Only part of an insertion node is locked during the insert, and simultaneous insertions by multiple processors in the same node are allowed. A restructuring algorithm runs periodically in the background and requires at most one wait by any search or update operation. Our implementations demonstrate that the B^mad-tree algorithms outperform the best known B^link-trees, and compare favorably with linear hashing. We achieve good speedup (e.g., 4.79 with 8 processors) for partitioned algorithms, and moderate speedup (2.49 with 8 processors) for locked algorithms, even including overhead costs. The insert times obtained for B^mad-trees are 50% to 60% lower than those for B^link-trees in partitioned implementations, and 70% to 80% lower in locked implementations. The speedup results on the distributed-memory platform (a network of workstations) were less encouraging due to high communication costs.
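The abstract highlights two ideas behind the B^mad-tree's concurrency: dilation (nodes carry spare slots so an insert does not immediately trigger a split) and partial node locking (only part of a node is locked, so several processors can insert into the same node at once). The sketch below is a minimal illustration of those two ideas only, not the paper's actual node layout, key ordering, or locking protocol; the region count, slot count, and the leaf_init/leaf_insert names are assumptions made for this example.

/*
 * Illustrative sketch only: a leaf node with "dilation" (spare slots), whose
 * slots are grouped into independently locked regions so that concurrent
 * threads can insert into the same leaf at the same time.  The real
 * B^mad-tree structure and restructuring protocol are defined in the paper;
 * REGIONS, SLOTS_PER_REGION, leaf_init and leaf_insert are assumptions.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define REGIONS          4   /* independently locked parts of one leaf        */
#define SLOTS_PER_REGION 8   /* dilation: spare room so inserts rarely split  */

typedef struct {
    pthread_mutex_t lock;                  /* guards only this region         */
    int             keys[SLOTS_PER_REGION];
    int             count;
} region_t;

typedef struct {
    region_t regions[REGIONS];             /* one leaf = several regions      */
} leaf_t;

void leaf_init(leaf_t *leaf) {
    for (int r = 0; r < REGIONS; r++) {
        pthread_mutex_init(&leaf->regions[r].lock, NULL);
        leaf->regions[r].count = 0;
    }
}

/* Insert a key while locking only one region of the leaf.  Returns false when
 * that region is full, i.e. when the background restructuring pass would be
 * needed (not modelled in this sketch). */
bool leaf_insert(leaf_t *leaf, int key) {
    region_t *reg = &leaf->regions[(unsigned)key % REGIONS];
    bool ok = false;
    pthread_mutex_lock(&reg->lock);
    if (reg->count < SLOTS_PER_REGION) {
        reg->keys[reg->count++] = key;     /* no node split on insert         */
        ok = true;
    }
    pthread_mutex_unlock(&reg->lock);
    return ok;
}

int main(void) {
    leaf_t leaf;
    leaf_init(&leaf);
    for (int k = 0; k < 10; k++)
        printf("insert %d -> %s\n", k, leaf_insert(&leaf, k) ? "ok" : "full");
    return 0;
}

Because each region has its own mutex, two threads inserting keys that map to different regions never contend, which is the spirit of "only part of an insertion node is locked" in the abstract; how the actual B^mad-tree partitions and rebalances a node is specified in the paper itself.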