{"title":"Vicinity shading for enhanced perception of volumetric data","authors":"A. J. Stewart","doi":"10.1109/VISUAL.2003.1250394","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250394","url":null,"abstract":"This paper presents a shading model for volumetric data which enhances the perception of surfaces within the volume. The model incorporates uniform diffuse illumination, which arrives equally from all directions at each surface point in the volume. This illumination is attenuated by occlusions in the local vicinity of the surface point, resulting in shadows in depressions and crevices. Experiments by other authors have shown that perception of a surface is superior under uniform diffuse lighting, compared to illumination from point source lighting.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125636743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quasi-static approach approximation for 6 degrees-of-freedom haptic rendering","authors":"M. Wan, William A. McNeely","doi":"10.1109/VISUAL.2003.1250380","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250380","url":null,"abstract":"In this paper, we propose a quasi-static approximation (QSA) approach to simulate the movement of the movable object in 6-degrees-of-freedom (DOF) haptic rendering. In our QSA approach, we solve for static equilibrium during each haptic time step, ignoring any dynamical properties such as inertia. The major contribution of this approach is to overcome the computational instability problem in overly stiff systems arising from numerical integration of second-order differential equations in previous dynamic models. Our primary experimental results on both simulated aircraft geometry and a large-scale real-world aircraft engine showed that our QSA approach was capable of maintaining the 1000Hz haptic refresh rate with more robust collision avoidance and more reliable force and torque feedback.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127731156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monte Carlo volume rendering","authors":"B. Csébfalvi, László Szirmay-Kalos","doi":"10.1109/VIS.2003.10000","DOIUrl":"https://doi.org/10.1109/VIS.2003.10000","url":null,"abstract":"In this paper a novel volume-rendering technique based on Monte Carlo integration is presented. As a result of a preprocessing, a point cloud of random samples is generated using a normalized continuous reconstruction of the volume as a probability density function. This point cloud is projected onto the image plane, and to each pixel an intensity value is assigned which is proportional to the number of samples projected onto the corresponding pixel area. In such a way a simulated X-ray image of the volume can be obtained. Theoretically, for a fixed image resolution, there exists an M number of samples such that the average standard deviation of the estimated pixel intensities us under the level of quantization error regardless of the number of voxels. Therefore Monte Carlo Volume Rendering (MCVR) is mainly proposed to efficiently visualize large volume data sets. Furthermore, network applications are also supported, since the trade-off between image quality and interactivity can be adapted to the bandwidth of the client/server connection by using progressive refinement.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134052867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of topologically simple isosurfaces from volume datasets","authors":"A. Szymczak, James Vanderhyde","doi":"10.1109/VISUAL.2003.1250356","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250356","url":null,"abstract":"There are numerous algorithms in graphics and visualization whose performance is known to decay as the topological complexity of the input increases. On the other hand, the standard pipeline for 3D geometry acquisition often produces 3D models that are topologically more complex than their real forms. We present a simple and efficient algorithm that allows us to simplify the topology of an isosurface by alternating the values of some number of voxels. Its utility and performance are demonstrated on several examples, including signed distance functions from polygonal models and CT scans.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134406835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hardware-based ray casting for tetrahedral meshes","authors":"Manfred Weiler, M. Kraus, Markus Merz, T. Ertl","doi":"10.1109/VISUAL.2003.1250390","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250390","url":null,"abstract":"We present the first implementation of a volume ray casting algorithm for tetrahedral meshes running on off-the-shelf programmable graphics hardware. Our implementation avoids the memory transfer bottleneck of the graphics bus since the complete mesh data is stored in the local memory of the graphics adapter and all computations, in particular ray traversal and ray integration, are performed by the graphics processing unit. Analogously to other ray casting algorithms, our algorithm does not require an expensive cell sorting. Provided that the graphics adapter offers enough texture memory, our implementation performs comparable to the fastest published volume rendering algorithms for unstructured meshes. Our approach works with cyclic and/or non-convex meshes and supports early ray termination. Accurate ray integration is guaranteed by applying pre-integrated volume rendering. In order to achieve almost interactive modifications of transfer functions, we propose a new method for computing three-dimensional pre-integration tables.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"37 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123330312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Psychophysical scaling of a cardiovascular information display","authors":"Robert Albert, Noah Syroid, Yinqi Zhang, J. Agutter, F. Drews, D. Strayer, George Hutchinson, D. Westenskow","doi":"10.1109/VISUAL.2003.1250352","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250352","url":null,"abstract":"A new method was developed to increase the saliency of changing variables in a cardiovascular visualization for use by anesthesiologists in the operating room (OR). Clinically meaningful changes in patient physiology were identified and then mapped to the inherent psychophysical properties of the visualization. A long history of psychophysical research has provided an understanding of the parameters within which the human information processing system is able to detect changes in the size, shape and color of visual objects (Gescheider, 1976, Spence, 1990, and Baird, 1970). These detection thresholds are known as just noticeable differences (JNDs) which characterize the amount of change in an object's attribute that is recognizable 50% of the time. A prototype version of the display has been demonstrated to facilitate anesthesiologist's performance while reducing cognitive workload during simulated cardiac events (Agutter et al., 2002). In order to further improve the utility of the new cardiovascular visualization, the clinically relevant changes in cardiovascular variables are mapped to noticeable perceptual changes in the representational elements of the display. The results of the method described in this paper are used to merge information from the psychophysical properties of the cardiovascular visualization, with clinically relevant changes in the patient's cardiovascular physiology as measured by the clinical meaningfulness questionnaire. The result of this combination will create a visualization that is sensitive to changes in the cardiovascular health of the patient and communicates this information to the user in a meaningful, salient and intuitive manner.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121882821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advanced curved planar reformation: flattening of vascular structures","authors":"A. Kanitsar, R. Wegenkittl, D. Fleischmann, E. Gröller","doi":"10.1109/VISUAL.2003.1250353","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250353","url":null,"abstract":"Traditional volume visualization techniques may provide incomplete clinical information needed for applications in medical visualization. In the area of vascular visualization important features such as the lumen of a diseased vessel segment may not be visible. Curved planar reformation (CPR) has proven to be an acceptable practical solution. Existing CPR techniques, however, still have diagnostically relevant limitations. In this paper, we introduce two advances methods for efficient vessel visualization, based on the concept of CPR. Both methods benefit from relaxation of spatial coherence in favor of improved feature perception. We present a new technique to visualize the interior of a vessel in a single image. A vessel is resampled along a spiral around its central axis. The helical spiral depicts the vessel volume. Furthermore, a method to display an entire vascular tree without mutually occluding vessels is presented. Minimal rotations at the bifurcations avoid occlusions. For each viewing direction the entire vessel structure is visible.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123973763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive deformation and visualization of level set surfaces using graphics hardware","authors":"A. Lefohn, J. Kniss, C. Hansen, R. Whitaker","doi":"10.1109/VISUAL.2003.1250357","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250357","url":null,"abstract":"Deformable isosurfaces, implemented with level-set methods, have demonstrated a great potential in visualization for applications such as segmentation, surface processing, and surface reconstruction. Their usefulness has been limited, however, by their high computational cost and reliance on significant parameter tuning. This paper presents a solution to these challenges by describing graphics processor (GPU) based on algorithms for solving and visualizing level-set solutions at interactive rates. Our efficient GPU-based solution relies on packing the level-set isosurface data into a dynamic, sparse texture format. As the level set moves, this sparse data structure is updated via a novel GPU to CPU message passing scheme. When the level-set solver is integrated with a real-time volume renderer operating on the same packed format, a user can visualize and steer the deformable level-set surface as it evolves. In addition, the resulting isosurface can serve as a region-of-interest specifier for the volume renderer. This paper demonstrates the capabilities of this technology for interactive volume visualization and segmentation.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126320392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A frequency-sensitive point hierarchy for images and volumes","authors":"Tom Welsh, K. Mueller","doi":"10.1109/VISUAL.2003.1250403","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250403","url":null,"abstract":"This paper introduces a method for converting an image or volume sampled on a regular grid into a space-efficient irregular point hierarchy. The conversion process retains the original frequency characteristics of the dataset by matching the spatial distribution of sample points with the required frequency. To achieve good blending, the spherical points commonly used in volume rendering are generalized to ellipsoidal point primitives. A family of multiresolution, oriented Gabor wavelets provide the frequency-space analysis of the dataset. The outcome of this frequency analysis is the reduced set of points, in which the sampling rate is decreased in originally oversampled areas. During rendering, the traversal of the hierarchy can be controlled by any suitable error metric or quality criteria. The local level of refinement is also sensitive to the transfer function. Areas with density ranges mapped to high transfer function variability are rendered at higher point resolution than others. Our decomposition is flexible and can be used for iso-surface rendering, alpha compositing and X-ray rendering of volumes. We demonstrate our hierarchy with an interactive splatting volume renderer, in which the traversal of the point hierarchy for rendering is modulated by a user-specified frame rate.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124639963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A texture-based framework for spacetime-coherent visualization of time-dependent vector fields","authors":"D. Weiskopf, G. Erlebacher, T. Ertl","doi":"10.1109/VISUAL.2003.1250361","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250361","url":null,"abstract":"We propose unsteady flow advection-convolution (UFAC) as a novel visualization approach for unsteady flows. It performs time evolution governed by pathlines, but builds spatial correlation according to instantaneous streamlines whose spatial extent is controlled by the flow unsteadiness. UFAC is derived from a generic framework that provides spacetime-coherent dense representations of time dependent-vector fields by a two-step process: 1) construction of continuous trajectories in spacetime for temporal coherence; and 2) convolution along another set of paths through the above spacetime for spatially correlated patterns. Within the framework, known visualization techniques-such as Lagrangian-Eulerian advection, image-based flow visualization, unsteady flow LIC, and dynamic LIC-can be reproduced, often with better image quality, higher performance, or increased flexibility of the visualization style. Finally, we present a texture-based discretization of the framework and its interactive implementation on graphics hardware, which allows the user to gradually balance visualization speed against quality.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134456434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}