{"title":"End-to-end data reduction and hardware accelerated rendering techniques for visualizing time-varying non-uniform grid volume data","authors":"H. Akiba, K. Ma, J. Clyne","doi":"10.2312/VG/VG05/031-039","DOIUrl":"https://doi.org/10.2312/VG/VG05/031-039","url":null,"abstract":"We present a systematic approach for direct volume rendering terascale-sized data that are time-varying, and possibly non-uniformly sampled, using only a single commodity graphics PC. Our method employs a data reduction scheme that combines lossless, wavelet-based progressive data access with a user-directed, hardware-accelerated data packing technique. Data packing is achieved by discarding data blocks with values outside the data interval of interest and encoding the remaining data in a structure that can be efficiently decoded in the GPU. The compressed data can be transferred between disk, main memory, and video memory more efficiently, leading to more effective data exploration in both spatial and temporal domains. Furthermore, our texture-map based volume rendering system is capable of correctly displaying data that are sampled on a stretched, Cartesian grid. To study the effectiveness of our technique we used data sets generated from a large solar convection simulation, computed on a non-uniform, 504/spl times/504/spl times/2048 grid.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115701530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining point clouds and volume objects in volume scene graphs","authors":"Min Chen","doi":"10.2312/VG/VG05/127-135","DOIUrl":"https://doi.org/10.2312/VG/VG05/127-135","url":null,"abstract":"This paper describes an extension to the technical framework of constructive volume geometry (CVG) in order to accommodate point clouds in volume scene graphs. It introduces the notion of point-based volume object (PBVO) that is characterized by the opacity, rather than the geometry, of a point cloud. It examines and compares several radial basis functions (RBFs), including the one proposed in this paper, for constructing scalar fields from point clouds. It applies basic CVG operators to PBVOs and demonstrates the inter-operability of PBVOs with conventional volume objects including those procedurally defined and those constructed from volume datasets. It presents an octree-based algorithm for reducing the complexity in rendering a PBVO with a large number of points, and a set of testing results showing a significant speedup when an octree is deployed for rendering PBVOs.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124686910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization of time-varying volumetric data using differential time-histogram table","authors":"Hamid Younesy, Torsten Möller, H. Carr","doi":"10.2312/VG/VG05/021-029","DOIUrl":"https://doi.org/10.2312/VG/VG05/021-029","url":null,"abstract":"We introduce a novel data structure called differential time-histogram table (DTHT) for visualization of time-varying scalar data. This data structure only stores voxels that are changing between time-steps or during transfer function updates. It allows efficient updates of data necessary for rendering during a sequence of queries common during data exploration and visualization. The table is used to update the values held in memory so that efficient visualization is supported while guaranteeing that the scalar field visualized is within a given error tolerance of the scalar field sampled. Our data structure allows updates of time-steps in the order of tens of frames per second for volumes of sizes of 4.5GB, enabling real-time time-sliders.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123350121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Volume Textures - Keynote","authors":"H. Rushmeier","doi":"10.1109/VG.2005.194090","DOIUrl":"https://doi.org/10.1109/VG.2005.194090","url":null,"abstract":"Since their introduction in semina1 papers by Peachey and Perlin in 1985, volume textures have been a popular modeling tool. Not only are they useful for creating complex and consistent textures, they are needed for rendering weathering effects on various materials such as stone, as demonstrated by Dorsey et al. in 1999. Wile it is possible to generate volume textures purely procedurally, recent interest in 2D texture synthesis from example has spurred interest in synthesizing volumetric textures from physical example. I will present applications where volume textures from samples are of interest, some approaches to estimating them, and some early work on evaluating whether the synthesized volumes are correct. Short Biography Holly Rushmeier is a professor of computer science at Yale University. Her current research focuses on scanning and modeling of shape and appearance properties, and on applications in cultural heritage. She teaches courses in computer graphics and visualization at both the graduate and undergraduate levels. She received the BS, MS and PhD degrees in Mechanical Engineering from Cornel1 University in 1977, 1986 and 1988 respectively. Between receiving the BS and returning to graduate school in 1983 she worked as an engineer at the Boeing Commercial Airplane Company and at Washington Natural Gas Company, After receiving the PhD she held positions at Georgia Tech, the National Institute of Standards and Technology and the IBM T.J. Watson Research Center prior to joining the Yale faculty in 2004. She has worked on a number of different problems in rendering, including global illumination and tone reproduction. At NIST and IBM she worked on a variety of data visualization problems in areas ranging from engineering to finance. Most recently her work was primarily in the area of acquisition of data required for generating realistic computer graphics models, including a project to create a digital model of Michelangelo’s Florence Pieta and models of Egyptian cultural artifacts in a joint project between IBM and the Government of Egypt. Dr. Rushmeier was Editor-in-Chief of ACM Transactions on Graphics from 1996-99, and is currently a member of the ACM Publications Board. She has also served on the editorial board of IEEE Transactions on Visualization .and Computer Graphics, She is currently on the editorial boards of IEEE Computer Graphics and Applications, Computer Graphics Forum and ACM Transactions on Applied Perception. In 1996 she served as the papers chair for the ACM SIGGRAPH conference, in 1998 and 2004 as the papers co-chair for the IEEE Visualization conference and in 2000 as the papers co-chair for the Eurographics Rendering Workshop. In 2005 she is serving as co-chair for the IEEE Visualization and 3DIM conferences. 
She has also served in numerous program committees including multiple years on the committees for SIGGRAPH, IEEE Visualization, Eurographics, Eurographics Rendering Workshop, and Graphics Interface.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121442307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representation of objects with sharp details in truncated distance fields","authors":"P. Novotný, M. Srámek","doi":"10.2312/VG/VG05/109-116","DOIUrl":"https://doi.org/10.2312/VG/VG05/109-116","url":null,"abstract":"We present a new approach for voxelization of implicit solids which contain sharp details. If such objects are processed by common techniques, voxelization artifacts may appear, resulting, among others, in jaggy edges in rendered images. To cope with this problem we proposed a technique called sharp details correction. The main idea is to modify objects during the process of voxelization according to the representability criterion. This means that sharp edges end vertices are rounded to a curvature, which depends on the grid resolution. Thus, we obtain artifact-free voxelized solids which produce alias-free images.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134465593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volume rendering for high dynamic range displays","authors":"A. Ghosh, Matthew Trentacoste, W. Heidrich","doi":"10.2312/VG/VG05/091-098","DOIUrl":"https://doi.org/10.2312/VG/VG05/091-098","url":null,"abstract":"Dynamic range restrictions of conventional displays limit the amount of detail that can be represented in volume rendering applications. However, high dynamic range displays with contrast ratios larger than 50,000 : 1 have recently been developed. We explore how these increased capabilities can be exploited for common volume rendering algorithms such as direct volume rendering and maximum projection rendering. In particular, we discuss distribution of intensities across the range of the display contrast and a mapping of the transfer function to a perceptually linear space over the range of intensities that the display can produce. This allows us to reserve several just noticeable difference steps of intensities for spatial context apart from clearly depicting the main regions of interest. We also propose generating automatic transfer functions for order independent operators through histogram-equalization of data in perceptually linear space.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113959350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An evaluation of using real-time volumetric display of 3D ultrasound data for intracardiac catheter manipulation tasks","authors":"Aaron S. Wang, G. Narayan, D. Kao, D. Liang","doi":"10.2312/VG/VG05/041-045","DOIUrl":"https://doi.org/10.2312/VG/VG05/041-045","url":null,"abstract":"The enthusiasm for novel, minimally invasive, catheter based intracardiac procedures has highlighted the need to provide accurate, realtime, anatomically based image guidance to decrease complications, improve precision, and decrease fluoroscopy time. The recent development of realtime 3D echocardiography creates the opportunity to greatly improve our ability to guide minimally invasive procedures (Ahmad, 2003). However, the need to present 3D data on a 2D display decreases the utility of 3D echocardiography because echocardiographers cannot readily appreciate 3D perspective on a 2D display without ongoing image manipulation. We evaluated the use of a novel strategy of presenting the data in a true 3D volumetric display, Perspecta Spatial 3D System (Actuality Systems, Inc., Burlington, MA). Two experienced echocardiographers performed the task of identifying the targeted location of a catheter within 6 different phantoms using four display methods. Echocardiographic images were obtained with a SONOS 7500 (Philips Medical Systems, Inc., Andover, MA). Completion of the task was significantly faster with the Perspecta display with no loss in accuracy. Echocardiography in 3D significantly improves the ability of echocardiography for guidance of catheter based procedures. Further improvement is achieved by using a true 3D volumetric display, which allows for more intuitive assessment of the spatial relationships of catheters in three-dimensional space compared with conventional 2D visualization modalities.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127172355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"iSBVR: isosurface-AIDED hardware acceleration techniques for slice-based volume rendering","authors":"Daqing Xue, Caixia Zhang, R. Crawfis","doi":"10.2312/VG/VG05/207-215","DOIUrl":"https://doi.org/10.2312/VG/VG05/207-215","url":null,"abstract":"In this paper, we examine the performance of the early z-culling feature on current high-end commodity graphics cards and present an isosurface-aided hardware acceleration algorithm for slice-based volume rendering (iSBVR) to maximize its utilization. We analyze the computational models for early z-culling of the texture based volume rendering. We demonstrate that the performance improves with two to four times speedup against an original straightforward SBVR on an ATI 9800pro display board. As volumetric shaders become increasingly complex, the advantages of fast z-culling will become even more pronounced.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121768577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simple and flexible volume rendering framework for graphics-hardware-based raycasting","authors":"S. Stegmaier, M. Strengert, T. Klein, T. Ertl","doi":"10.2312/VG/VG05/187-195","DOIUrl":"https://doi.org/10.2312/VG/VG05/187-195","url":null,"abstract":"In this work we present a flexible framework for GPU-based volume rendering. The framework is based on a single pass volume raycasting approach and is easily extensible in terms of new shader functionality. We demonstrate the flexibility of our system by means of a number of high-quality standard and nonstandard volume rendering techniques. Our implementation shows a promising performance in a number of benchmarks while producing images of higher accuracy than obtained by standard pre-integrated slice-based volume rendering.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122163332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simplification of unstructured tetrahedral meshes by point sampling","authors":"D. Uesu, Louis Bavoil, S. Fleishman, J. Shepherd, Cláudio T. Silva","doi":"10.2312/VG/VG05/157-165","DOIUrl":"https://doi.org/10.2312/VG/VG05/157-165","url":null,"abstract":"Tetrahedral meshes are widely used in scientific computing for representing 3D scalar, vector, and tensor fields. The size and complexity of some of these meshes can limit the performance of many visualization algorithms, making it hard to achieve interactive visualization. The use of simplified models is one way to enable the real-time exploration of these datasets. In this paper, we propose a novel technique for simplifying large unstructured meshes. Most current techniques simplify the geometry of the mesh using edge collapses. Our technique simplifies an underlying scalar field directly by segmenting the original scalar field into two pieces: the boundary of the original domain and the interior samples of the scalar field. We then simplify each piece separately, taking into account proper error bounds. Finally, we combine the simplified domain boundary and scalar field into a complete, simplified mesh that can be visualized with standard unstructured-data visualization tools. Our technique is much faster than edge-collapse-based simplification approaches. Furthermore, it is particularly suitable for aggressive simplification. Experiments show that isosurfaces and volume renderings of meshes produced by our technique have few noticeable visual artifacts.","PeriodicalId":443333,"journal":{"name":"Fourth International Workshop on Volume Graphics, 2005.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132016700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}