Proc. VLDB Endow. Latest Publications

The FastLanes Compression Layout: Decoding >100 Billion Integers per Second with Scalar Code
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598587
Azim Afroozeh, P. Boncz
{"title":"The FastLanes Compression Layout: Decoding >100 Billion Integers per Second with Scalar Code","authors":"Azim Afroozeh, P. Boncz","doi":"10.14778/3598581.3598587","DOIUrl":"https://doi.org/10.14778/3598581.3598587","url":null,"abstract":"\u0000 The open-source FastLanes project aims to improve big data formats, such as Parquet, ORC and columnar database formats, in multiple ways. In this paper, we significantly accelerate decoding of all common Light-Weight Compression (LWC) schemes: DICT, FOR, DELTA and RLE through better data-parallelism. We do so by re-designing the compression layout using two main ideas: (i) generalizing the\u0000 value interleaving\u0000 technique in the basic operation of bit-(un)packing by targeting a virtual 1024-bits SIMD register, (ii) reordering the tuples in all columns of a table in the same Unified Transposed Layout that puts tuple chunks in a common \"04261537\" order (explained in the paper); allowing for maximum independent work for all possible basic SIMD lane widths: 8, 16, 32, and 64 bits.\u0000 \u0000 We address the software development, maintenance and future-proofness challenges of increasing hardware diversity, by defining a virtual 1024-bits instruction set that consists of simple operators supported by all SIMD dialects; and also, importantly, by scalar code. The interleaved and tuple-reordered layout actually makes scalar decoding faster, extracting more data-parallelism from today's wide-issue CPUs. Importantly, the scalar version can be fully auto-vectorized by modern compilers, eliminating technical debt in software caused by platform-specific SIMD intrinsics.\u0000 Micro-benchmarks on Intel, AMD, Apple and AWS CPUs show that FastLanes accelerates decoding by factors (decoding >40 values per CPU cycle). FastLanes can make queries faster, as compressing the data reduces bandwidth needs, while decoding is almost free.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87752745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
TiQuE: Improving the Transactional Performance of Analytical Systems for True Hybrid Workloads
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598598
Nuno Faria, J. Pereira, A. Alonso, R. Vilaça, Yunus Koning, N. Nes
{"title":"TiQuE: Improving the Transactional Performance of Analytical Systems for True Hybrid Workloads","authors":"Nuno Faria, J. Pereira, A. Alonso, R. Vilaça, Yunus Koning, N. Nes","doi":"10.14778/3598581.3598598","DOIUrl":"https://doi.org/10.14778/3598581.3598598","url":null,"abstract":"Transactions have been a key issue in database management for a long time and there are a plethora of architectures and algorithms to support and implement them. The current state-of-the-art is focused on storage management and is tightly coupled with its design, leading, for instance, to the need for completely new engines to support new features such as Hybrid Transactional Analytical Processing (HTAP). We address this challenge with a proposal to implement transactional logic in a query language such as SQL. This means that our approach can be layered on existing analytical systems but that the retrieval of a transactional snapshot and the validation of update transactions runs in the server and can take advantage of advanced query execution capabilities of an optimizing query engine. We demonstrate our proposal, TiQuE, on MonetDB and obtain an average 500x improvement in transactional throughput while retaining good performance on analytical queries, making it competitive with the state-of-the-art HTAP systems.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74976528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SDPipe: A Semi-Decentralized Framework for Heterogeneity-aware Pipeline-parallel Training
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598604
Xupeng Miao, Yining Shi, Zhi Yang, Bin Cui, Zhihao Jia
{"title":"SDPipe: A Semi-Decentralized Framework for Heterogeneity-aware Pipeline-parallel Training","authors":"Xupeng Miao, Yining Shi, Zhi Yang, Bin Cui, Zhihao Jia","doi":"10.14778/3598581.3598604","DOIUrl":"https://doi.org/10.14778/3598581.3598604","url":null,"abstract":"\u0000 The increasing size of both deep learning models and training data necessitates the ability to scale out model training through pipeline-parallel training, which combines pipelined model parallelism and data parallelism. However, most of them assume an ideal homogeneous dedicated cluster. As for real cloud clusters, these approaches suffer from the intensive model synchronization overheads due to the dynamic environment heterogeneity. Such a huge challenge leaves the design in a dilemma: either the performance bottleneck of the central parameter server (PS) or severe performance degradation caused by stragglers for decentralized synchronization (like All-Reduce). This approach presents SDPipe, a new\u0000 semi-decentralized\u0000 framework to get the best of both worlds, achieving both high heterogeneity tolerance and convergence efficiency in pipeline-parallel training. To provide high performance, we decentralize the communication model synchronization, which accounts for the largest proportion of synchronization overhead. In contrast, we centralize the process of group scheduling, which is lightweight but needs a global view for better performance and convergence speed against heterogeneity. We show via a prototype implementation the significant advantage of SDPipe on performance and scalability, facing different environments.\u0000","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76849676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
LRU-C: Parallelizing Database I/Os for Flash SSDs
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598605
Bo-Hyun Lee, Mijin An, Sang-Won Lee
{"title":"LRU-C: Parallelizing Database I/Os for Flash SSDs","authors":"Bo-Hyun Lee, Mijin An, Sang-Won Lee","doi":"10.14778/3598581.3598605","DOIUrl":"https://doi.org/10.14778/3598581.3598605","url":null,"abstract":"\u0000 The conventional database buffer managers have two inherent sources of I/O serialization: read stall and mutex conflict. The serialized I/O makes storage and CPU under-utilized, limiting transaction throughput and latency. Such harm stands out on flash SSDs with asymmetric read-write speed and abundant I/O parallelism. To make database I/Os parallel and thus leverage the parallelism in flash SSDs, we propose a novel approach to database buffering, the\u0000 LRU-C\u0000 method. It introduces the LRU-C pointer that points to the\u0000 least-recently-used-clean\u0000 page in the LRU list. Upon a page miss, LRU-C selects the current LRU-clean page as a victim and adjusts the pointer to the next LRU-clean one in the LRU list. This way, LRU-C can avoid the I/O serialization of read stalls. The LRU-C pointer enables two further optimizations for higher I/O throughput:\u0000 dynamic-batch-write\u0000 and\u0000 parallel LRU-list manipulation.\u0000 The former allows the background flusher to write more dirty pages at a time, while the latter mitigates mutex-induced I/O serializations. Experiment results from running OLTP workloads using MySQL-based LRU-C prototype on flash SSDs show that it improves transaction throughput compared to the Vanilla MySQL and the state-of-the-art WAR solution by 3x and 1.52x, respectively, and also cuts the tail latency drastically. Though LRU-C might compromise the hit ratio slightly, its increased I/O throughput far offsets the reduced hit ratio.\u0000","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90857233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Designing and Learning Piecewise Space-Filling Curves
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598589
Jiangneng Li, Zheng Wang, Gao Cong, Cheng Long, H. M. Kiah, Bin Cui
{"title":"Towards Designing and Learning Piecewise Space-Filling Curves","authors":"Jiangneng Li, Zheng Wang, Gao Cong, Cheng Long, H. M. Kiah, Bin Cui","doi":"10.14778/3598581.3598589","DOIUrl":"https://doi.org/10.14778/3598581.3598589","url":null,"abstract":"To index multi-dimensional data, space-filling curves (SFCs) have been used to map the data to one dimension, and then a one-dimensional indexing method such as the B-tree is used to index the mapped data. The existing SFCs all adopt a single mapping scheme for the whole data space. However, a single mapping scheme often does not perform well on all the data space. In this paper, we propose a new type of SFC called piecewise SFCs, which adopts different mapping schemes for different data subspaces. Specifically, we propose a data structure called Bit Merging tree (BMTree), which can generate data subspaces and their SFCs simultaneously and achieve desirable properties of the SFC for the whole data space. Furthermore, we develop a reinforcement learning based solution to build the BMTree, aiming to achieve excellent query performance. Extensive experiments show that our proposed method outperforms existing SFCs in terms of query performance.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79066417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
What Modern NVMe Storage Can Do, And How To Exploit It: High-Performance I/O for High-Performance Storage Engines
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598584
Gabriel Haas, Viktor Leis
{"title":"What Modern NVMe Storage Can Do, And How To Exploit It: High-Performance I/O for High-Performance Storage Engines","authors":"Gabriel Haas, Viktor Leis","doi":"10.14778/3598581.3598584","DOIUrl":"https://doi.org/10.14778/3598581.3598584","url":null,"abstract":"NVMe SSDs based on flash are cheap and offer high throughput. Combining several of these devices into a single server enables 10 million I/O operations per second or more. Our experiments show that existing out-of-memory database systems and storage engines achieve only a fraction of this performance. In this work, we demonstrate that it is possible to close the performance gap between hardware and software through an I/O optimized storage engine design. In a heavy out-of-memory setting, where the dataset is 10 times larger than main memory, our system can achieve more than 1 million TPC-C transactions per second.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73983251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Decoupled Graph Neural Networks for Large Dynamic Graphs
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.48550/arXiv.2305.08273
Y. Zheng, Zhewei Wei, Jiajun Liu
{"title":"Decoupled Graph Neural Networks for Large Dynamic Graphs","authors":"Y. Zheng, Zhewei Wei, Jiajun Liu","doi":"10.48550/arXiv.2305.08273","DOIUrl":"https://doi.org/10.48550/arXiv.2305.08273","url":null,"abstract":"Real-world graphs, such as social networks, financial transactions, and recommendation systems, often demonstrate dynamic behavior. This phenomenon, known as graph stream, involves the dynamic changes of nodes and the emergence and disappearance of edges. To effectively capture both the structural and temporal aspects of these dynamic graphs, dynamic graph neural networks have been developed. However, existing methods are usually tailored to process either continuous-time or discrete-time dynamic graphs, and cannot be generalized from one to the other. In this paper, we propose a decoupled graph neural network for large dynamic graphs, including a unified dynamic propagation that supports efficient computation for both continuous and discrete dynamic graphs. Since graph structure-related computations are only performed during the propagation process, the prediction process for the downstream task can be trained separately without expensive graph computations, and therefore any sequence model can be plugged-in and used. As a result, our algorithm achieves exceptional scalability and expressiveness. We evaluate our algorithm on seven real-world datasets of both continuous-time and discrete-time dynamic graphs. The experimental results demonstrate that our algorithm achieves state-of-the-art performance in both kinds of dynamic graphs. Most notably, the scalability of our algorithm is well illustrated by its successful application to large graphs with up to over a billion temporal edges and over a hundred million nodes.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85071720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MiniGraph: Querying Big Graphs with a Single Machine
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598590
Xiaoke Zhu, Yang Liu, Shuhao Liu, W. Fan
{"title":"MiniGraph: Querying Big Graphs with a Single Machine","authors":"Xiaoke Zhu, Yang Liu, Shuhao Liu, W. Fan","doi":"10.14778/3598581.3598590","DOIUrl":"https://doi.org/10.14778/3598581.3598590","url":null,"abstract":"This paper presents MiniGraph, an out-of-core system for querying big graphs with a single machine. As opposed to previous single-machine graph systems, MiniGraph proposes a pipelined architecture to overlap I/O and CPU operations, and improves multi-core parallelism. It also introduces a hybrid model to support both vertex-centric and graph-centric parallel computations, to simplify parallel graph programming, speed up beyond-neighborhood computations, and parallelize computations within each subgraph. The model induces a two-level parallel execution model to explore both inter-subgraph and intra-subgraph parallelism. Moreover, MiniGraph develops new optimization techniques under its architecture. Using real-life graphs of different types, we show that MiniGraph is up to 76.1x faster than prior out-of-core systems, and performs better than some multi-machine systems that use up to 12 machines.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72840031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Temporal SIR-GN: Efficient and Effective Structural Representation Learning for Temporal Graphs
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598583
Janet Layne, Justin Carpenter, Edoardo Serra, Francesco Gullo
{"title":"Temporal SIR-GN: Efficient and Effective Structural Representation Learning for Temporal Graphs","authors":"Janet Layne, Justin Carpenter, Edoardo Serra, Francesco Gullo","doi":"10.14778/3598581.3598583","DOIUrl":"https://doi.org/10.14778/3598581.3598583","url":null,"abstract":"Node representation learning (NRL) generates numerical vectors (embeddings) for the nodes of a graph. Structural NRL specifically assigns similar node embeddings for those nodes that exhibit similar structural roles. This is in contrast with its proximity-based counterpart, wherein similarity between embeddings reflects spatial proximity among nodes. Structural NRL is useful for tasks such as node classification where nodes of the same class share structural roles, though there may exist a distant, or no path between them.\u0000 Athough structural NRL has been well-studied in static graphs, it has received limited attention in the temporal setting. Here, the embeddings are required to represent the evolution of nodes' structural roles over time. The existing methods are limited in terms of efficiency and effectiveness: they scale poorly to even moderate number of timestamps, or capture structural role only tangentially.\u0000 \u0000 In this work, we present a novel unsupervised approach to structural representation learning for temporal graphs that overcomes these limitations. For each node, our approach clusters then aggregates the embedding of a node's neighbors for each timestamp, followed by a further temporal aggregation of all timestamps. This is repeated for (at most)\u0000 d\u0000 iterations, so as to acquire information from the\u0000 d\u0000 -hop neighborhood of a node. Our approach takes linear time in the number of overall temporal edges, and possesses important theoretical properties that formally demonstrate its effectiveness.\u0000 \u0000 Extensive experiments on synthetic and real datasets show superior performance in node classification and regression tasks, and superior scalability of our approach to large graphs.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74748120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Extract-Transform-Load for Video Streams
Proc. VLDB Endow. Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598600
Ferdinand Kossmann, Ziniu Wu, Eugenie Lai, Nesime Tatbul, Lei Cao, Tim Kraska, S. Madden
{"title":"Extract-Transform-Load for Video Streams","authors":"Ferdinand Kossmann, Ziniu Wu, Eugenie Lai, Nesime Tatbul, Lei Cao, Tim Kraska, S. Madden","doi":"10.14778/3598581.3598600","DOIUrl":"https://doi.org/10.14778/3598581.3598600","url":null,"abstract":"\u0000 Social media, self-driving cars, and traffic cameras produce video streams at large scales and cheap cost. However, storing and querying video at such scales is prohibitively expensive. We propose to treat large-scale video analytics as a data warehousing problem: Video is a format that is easy to produce but needs to be transformed into an application-specific format that is easy to query. Analogously, we define the problem of Video Extract-Transform-Load (\u0000 V-ETL\u0000 ).\u0000 V-ETL\u0000 systems need to reduce the cost of running a user-defined\u0000 V-ETL\u0000 job while also giving throughput guarantees to keep up with the rate at which data is produced. We find that no current system sufficiently fulfills both needs and therefore propose\u0000 Skyscraper\u0000 , a system tailored to\u0000 V-ETL. Skyscraper\u0000 can execute arbitrary video ingestion pipelines and adaptively tunes them to reduce cost at minimal or no quality degradation, e.g., by adjusting sampling rates and resolutions to the ingested content.\u0000 Skyscraper\u0000 can hereby be provisioned with cheap on-premises compute and uses a combination of buffering and cloud bursting to deal with peaks in workload caused by expensive processing configurations. In our experiments, we find that\u0000 Skyscraper\u0000 significantly reduces the cost of\u0000 V-ETL\u0000 ingestion compared to adaptions of current SOTA systems, while at the same time giving robustness guarantees that these systems are lacking.\u0000","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77218679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0