Processing of volumetric data by slice- and process-based streaming

A. Varchola, A. Vasko, Viliam Solcany, L. Dimitrov, M. Srámek
{"title":"Processing of volumetric data by slice- and process-based streaming","authors":"A. Varchola, A. Vasko, Viliam Solcany, L. Dimitrov, M. Srámek","doi":"10.1145/1294685.1294703","DOIUrl":null,"url":null,"abstract":"Although the main memory capacity of modern computers is constantly growing, the developers and users of data manipulation and visualization tools fight all over again with the problem of its shortage. In this paper, we advocate slice-based streaming as a possible solution for the memory shortage problem in the case of preprocessing and analysis of volumetric data defined over Cartesian, regular and other types of structured grids. In our version of streaming, data flows through independent processing units---filters---represented by individual system processes, which store each just a minimal fraction of the whole data set, with a slice as a basic data entity. Such filters can be easily interconnected in complex networks by means of standard interprocess communication using named pipes and are executed concurrently on a parallel system without a requirement of specific modification or explicit parallelization.\n In our technique, the amount of stored data by a filter is defined by the algorithm implemented therein, and is in most cases as small as one data slice or only several slices. Thus, the upper bound on the processed data volume is not any more defined by the main memory size but is shifted to the disc capacity, which is usually orders of magnitude larger. We propose implementations of this technique for various point, local and even global data processing operations, which may require multiple runs over the input data or eventually temporary data buffering. Further, we give a detailed performance analysis and show how well this approach fits to the current trend of employing cheap multicore processors and multiprocessor computers.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1294685.1294703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Although the main memory capacity of modern computers is constantly growing, developers and users of data manipulation and visualization tools repeatedly run into the problem of memory shortage. In this paper, we advocate slice-based streaming as a possible solution to this problem for the preprocessing and analysis of volumetric data defined over Cartesian, regular and other types of structured grids. In our version of streaming, data flows through independent processing units (filters) represented by individual system processes, each of which stores only a minimal fraction of the whole data set, with a slice as the basic data entity. Such filters can easily be interconnected into complex networks by means of standard interprocess communication over named pipes, and they execute concurrently on a parallel system without requiring specific modification or explicit parallelization. In our technique, the amount of data stored by a filter is determined by the algorithm implemented in it and is in most cases as small as a single data slice or only a few slices. Thus, the upper bound on the processed data volume is no longer set by the main memory size but shifts to the disc capacity, which is usually orders of magnitude larger. We propose implementations of this technique for various point, local and even global data processing operations, which may require multiple passes over the input data or possibly temporary data buffering. Further, we give a detailed performance analysis and show how well this approach fits the current trend of employing cheap multicore processors and multiprocessor computers.
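To make the filter concept concrete, the following is a minimal sketch of a slice-streaming point filter in C. The stream format is an assumption made for illustration only (a three-integer header with the volume dimensions, followed by the slices as raw float arrays); the paper does not prescribe this layout, and the thresholding step stands in for an arbitrary point operation. The key property shown is that the filter reads slices from its standard input and writes them to its standard output one at a time, so its memory footprint stays at a single slice regardless of the total volume size.

/*
 * Sketch of a slice-streaming point filter.
 * Assumed stream format (hypothetical, for illustration only):
 *   header: 3 ints (nx, ny, nz), then nz slices of nx*ny floats.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int dims[3];

    /* Read the volume header from the upstream filter (stdin)
     * and forward it unchanged to the downstream filter (stdout). */
    if (fread(dims, sizeof(int), 3, stdin) != 3 ||
        fwrite(dims, sizeof(int), 3, stdout) != 3) {
        fprintf(stderr, "filter: failed to read or forward header\n");
        return EXIT_FAILURE;
    }

    size_t slice_size = (size_t)dims[0] * (size_t)dims[1];
    float *slice = malloc(slice_size * sizeof(float));
    if (slice == NULL) {
        fprintf(stderr, "filter: out of memory\n");
        return EXIT_FAILURE;
    }

    /* Process the volume one slice at a time: only a single slice
     * is ever resident in memory, however large the volume is. */
    for (int z = 0; z < dims[2]; z++) {
        if (fread(slice, sizeof(float), slice_size, stdin) != slice_size) {
            fprintf(stderr, "filter: truncated slice %d\n", z);
            free(slice);
            return EXIT_FAILURE;
        }

        /* Example point operation: binary thresholding at 0.5. */
        for (size_t i = 0; i < slice_size; i++)
            slice[i] = slice[i] > 0.5f ? 1.0f : 0.0f;

        if (fwrite(slice, sizeof(float), slice_size, stdout) != slice_size) {
            fprintf(stderr, "filter: failed to write slice %d\n", z);
            free(slice);
            return EXIT_FAILURE;
        }
    }

    free(slice);
    return EXIT_SUCCESS;
}

Filters of this kind can be chained with ordinary shell pipes or connected through named pipes (created with mkfifo) into the more complex networks the abstract describes; the operating system then schedules the individual filter processes onto separate cores, so concurrency comes for free without any explicit parallelization inside the filters.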