A. Varchola, A. Vasko, Viliam Solcany, L. Dimitrov, M. Srámek
{"title":"Processing of volumetric data by slice- and process-based streaming","authors":"A. Varchola, A. Vasko, Viliam Solcany, L. Dimitrov, M. Srámek","doi":"10.1145/1294685.1294703","DOIUrl":null,"url":null,"abstract":"Although the main memory capacity of modern computers is constantly growing, the developers and users of data manipulation and visualization tools fight all over again with the problem of its shortage. In this paper, we advocate slice-based streaming as a possible solution for the memory shortage problem in the case of preprocessing and analysis of volumetric data defined over Cartesian, regular and other types of structured grids. In our version of streaming, data flows through independent processing units---filters---represented by individual system processes, which store each just a minimal fraction of the whole data set, with a slice as a basic data entity. Such filters can be easily interconnected in complex networks by means of standard interprocess communication using named pipes and are executed concurrently on a parallel system without a requirement of specific modification or explicit parallelization.\n In our technique, the amount of stored data by a filter is defined by the algorithm implemented therein, and is in most cases as small as one data slice or only several slices. Thus, the upper bound on the processed data volume is not any more defined by the main memory size but is shifted to the disc capacity, which is usually orders of magnitude larger. We propose implementations of this technique for various point, local and even global data processing operations, which may require multiple runs over the input data or eventually temporary data buffering. Further, we give a detailed performance analysis and show how well this approach fits to the current trend of employing cheap multicore processors and multiprocessor computers.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1294685.1294703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Although the main memory capacity of modern computers is constantly growing, developers and users of data manipulation and visualization tools repeatedly run into the problem of its shortage. In this paper, we advocate slice-based streaming as a possible solution to the memory shortage problem in the preprocessing and analysis of volumetric data defined over Cartesian, regular, and other types of structured grids. In our version of streaming, data flows through independent processing units---filters---represented by individual system processes, each of which stores only a minimal fraction of the whole data set, with a slice as the basic data entity. Such filters can easily be interconnected into complex networks by means of standard interprocess communication using named pipes, and they execute concurrently on a parallel system without requiring specific modification or explicit parallelization.
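To make the filter-and-pipe architecture concrete, the following is a minimal sketch (not the authors' actual code or file format) of a point-operation filter in this streaming style. It assumes a hypothetical raw stream layout in which three 32-bit dimensions precede nz slices of 32-bit float voxels, and it is connected to its neighbours through named pipes whose paths are passed on the command line.

```cpp
// Hypothetical slice-streaming point filter (thresholding).
// Assumed stream format: uint32 nx, ny, nz, then nz slices of nx*ny floats.
// Example wiring in a shell:
//   mkfifo /tmp/in /tmp/out
//   ./threshold /tmp/in /tmp/out 0.5 &
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 4) {
        std::fprintf(stderr, "usage: %s <in_fifo> <out_fifo> <threshold>\n", argv[0]);
        return 1;
    }
    std::FILE* in  = std::fopen(argv[1], "rb");   // blocks until a writer opens the fifo
    std::FILE* out = std::fopen(argv[2], "wb");   // blocks until a reader opens the fifo
    if (!in || !out) { std::perror("fopen"); return 1; }
    const float t = std::strtof(argv[3], nullptr);

    std::uint32_t dim[3];                         // nx, ny, nz
    if (std::fread(dim, sizeof dim, 1, in) != 1) return 1;
    std::fwrite(dim, sizeof dim, 1, out);         // pass the header through unchanged

    std::vector<float> slice(static_cast<std::size_t>(dim[0]) * dim[1]);
    for (std::uint32_t z = 0; z < dim[2]; ++z) {  // only one slice in memory at a time
        if (std::fread(slice.data(), sizeof(float), slice.size(), in) != slice.size())
            return 1;
        for (float& v : slice) v = (v >= t) ? 1.0f : 0.0f;   // the point operation
        std::fwrite(slice.data(), sizeof(float), slice.size(), out);
    }
    return 0;
}
```

A pipeline would then be assembled in the shell by creating the FIFOs with mkfifo and launching each filter as a separate process; the operating system's pipe buffering provides the flow control between the concurrently running stages.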
In our technique, the amount of data stored by a filter is determined by the algorithm implemented therein and is in most cases as small as one data slice or only a few slices. Thus, the upper bound on the processed data volume is no longer set by the main memory size but shifts to the disk capacity, which is usually orders of magnitude larger. We propose implementations of this technique for various point, local, and even global data processing operations, which may require multiple passes over the input data or, possibly, temporary data buffering. Further, we give a detailed performance analysis and show how well this approach fits the current trend of employing inexpensive multicore processors and multiprocessor computers.
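As an illustration of a local operation that needs only a few slices of working memory, here is a sketch, under the same hypothetical stream format as above, of a 3-slice mean filter along the z axis. It keeps a sliding window of three slices and clamps the window at the volume boundaries; the buffering requirement stays at three slices regardless of how large the volume is.

```cpp
// Hypothetical local (neighbourhood) filter: 3-slice mean along z.
// Same assumed stream format and fifo handling as in the previous sketch.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s <in_fifo> <out_fifo>\n", argv[0]);
        return 1;
    }
    std::FILE* in  = std::fopen(argv[1], "rb");
    std::FILE* out = std::fopen(argv[2], "wb");
    if (!in || !out) { std::perror("fopen"); return 1; }

    std::uint32_t dim[3];
    if (std::fread(dim, sizeof dim, 1, in) != 1) return 1;
    std::fwrite(dim, sizeof dim, 1, out);

    const std::size_t n = static_cast<std::size_t>(dim[0]) * dim[1];
    std::vector<std::vector<float>> buf(3, std::vector<float>(n)); // slices z-1, z, z+1
    std::vector<float> outSlice(n);

    auto readSlice = [&](std::vector<float>& s) {
        return std::fread(s.data(), sizeof(float), n, in) == n;
    };

    if (!readSlice(buf[1])) return 1;   // slice 0
    buf[0] = buf[1];                    // clamp at the lower boundary
    for (std::uint32_t z = 0; z < dim[2]; ++z) {
        if (z + 1 < dim[2]) {
            if (!readSlice(buf[2])) return 1;   // slice z+1
        } else {
            buf[2] = buf[1];                    // clamp at the upper boundary
        }
        for (std::size_t i = 0; i < n; ++i)
            outSlice[i] = (buf[0][i] + buf[1][i] + buf[2][i]) / 3.0f;
        std::fwrite(outSlice.data(), sizeof(float), n, out);
        std::swap(buf[0], buf[1]);              // slide the window by one slice
        std::swap(buf[1], buf[2]);
    }
    return 0;
}
```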