{"title":"Interactive spectral volume rendering","authors":"S. Bergner, Torsten Möller, M. S. Drew, G. Finlayson","doi":"10.1109/VISUAL.2002.1183763","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183763","url":null,"abstract":"We describe a method for volume rendering using a spectral representation of colour instead of the traditional RGB model. It is shown how to use this framework for a novel exploration of datasets through enhanced transfer function design. Furthermore, our framework is extended to allow real-time re-lighting of the scene created with any rendering method. The technique of post-illumination is introduced to generate new spectral images for arbitrary light colours in real-time. Also a tool is described to design a palette of lights and materials having certain properties such as selective metamerism or colour constancy. Applied to spectral transfer functions, different light colours can accentuate or hide specific qualities of the data. In connection with post-illumination this provides a new degree of freedom for guided exploration of volumetric data, which cannot be achieved using the RGB model.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131901717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Out-of-core rendering of massive geometric environments","authors":"Gokul Varadhan, Dinesh Manocha","doi":"10.1109/VISUAL.2002.1183759","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183759","url":null,"abstract":"We present an external memory algorithm for fast display of very large and complex geometric environments. We represent the model using a scene graph and employ different culling techniques for rendering acceleration. Our algorithm uses a parallel approach to render the scene as well as fetch objects from the disk in a synchronous manner. We present a novel prioritized prefetching technique that takes into account LOD-switching and visibility-based events between successive frames. We have applied our algorithm to large gigabyte-sized environments that are composed of thousands of objects and tens of millions of polygons. The memory overhead of our algorithm is output sensitive and is typically tens of megabytes. In practice, our approach scales with the model sizes, and its rendering performance is comparable to that of an in-core algorithm.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124583865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Isometric embedding by surface reconstruction from distances","authors":"I. Hotz","doi":"10.1109/VISUAL.2002.1183782","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183782","url":null,"abstract":"To display the intuitive meaning of an abstract metric it is helpful to look on an embedded surface with the same inner geometry as the given metric. The resulting partial differential equations have no standard solution. Only for some special cases satisfactory methods are known. I present a new algorithmic approach which is not based on differential equations. In contrast to other methods this technique also works if the embedding exists only locally. The fundamental idea is to estimate Euclidean distances, from which the surface is built up. In this paper I focus on the reconstruction of a surface from these estimated distances. Particular the influence of a perturbation of the distances on the shape of the resulting surface is investigated.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130932203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparative evaluation of visualization and experimental results using image comparison metrics","authors":"Hualin Zhou, Min Chen, M. F. Webster","doi":"10.1109/VISUAL.2002.1183790","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183790","url":null,"abstract":"Comparative evaluation of visualization and experimental results is a critical step in computational steering. In this paper, we present a study of image comparison metrics for quantifying the magnitude of difference between visualization of a computer simulation and a photographic image captured from an experiment. We examined eleven metrics, including three spatial domain, four spatial-frequency domain and four HVS (human-vision system) metrics. Among these metrics, a spatial-frequency domain metric called 2nd-order Fourier comparison was proposed specifically for this work. Our study consisted of two stages: base cases and field trials. The former is a general study on a controlled comparison space using purposely selected data, and the latter involves imagery results from computational fluid dynamics and a rheological experiment. This study has introduced a methodological framework for analyzing image-level methods used in comparative visualization. For the eleven metrics considered, it has offered a set of informative indicators as to the strengths and weaknesses of each metric. In particular, we have identified three image comparison metrics that are effective in separating \"similar\" and \"different\" image groups. Our 2nd-order Fourier comparison metric has compared favorably with others in two of the three tests, and has shown its potential to be used for steering computer simulation quantitatively.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130229277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Case study: hardware-accelerated selective LIC volume rendering","authors":"Yasuko Suzuki, I. Fujishiro, Li Chen, Hiroko Nakamura","doi":"10.1109/VISUAL.2002.1183811","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183811","url":null,"abstract":"Line Integral Convolution (LIC) is a promising method for visualizing 2D dense flow fields. Direct extensions of the LIC method to 3D have not been considered very effective, because optical integration in viewing directions tends to spoil the coherent structures along 3D local streamlines. In our previous reports, we have proposed a selective approach to volume rendering of LIC solid texture using 3D significance map (S-map), derived from the characteristics of flow structures, and a specific illumination model for 3D streamlines. In this paper, we take full advantage of scalar volume rendering hardware, such as VolumePro, to realize a realtime 3D flow field visualization environment with the LIC volume rendering method.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121352068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volume clipping via per-fragment operations in texture-based volume visualization","authors":"D. Weiskopf, Klaus Engel, T. Ertl","doi":"10.1109/VISUAL.2002.1183762","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183762","url":null,"abstract":"We propose new clipping methods that are capable of using complex geometries for volume clipping. The clipping tests exploit per-fragment operations on the graphics hardware to achieve high frame rates. In combination with texture-based volume rendering, these techniques enable the user to interactively select and explore regions of the data set. We present depth-based clipping techniques that analyze the depth structure of the boundary representation of the clip geometry to decide which parts of the volume have to be clipped. In another approach, a voxelized clip object is used to identify the clipped regions.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115900085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PMR: point to mesh rendering, a feature-based approach","authors":"T. Dey, James Hudson","doi":"10.1109/VISUAL.2002.1183770","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183770","url":null,"abstract":"Within the field of computer graphics and visualization, it is often necessary to visualize polygonal models with large number of polygons. Display quality is mandatory, but it is also desirable to have the ability to rapidly update the display in order to facilitate interactive use. Point based rendering methods have been shown effective for this task. Building on this paradigm we introduce the PMR system which uses a hierarchy both in points and triangles for rendering. This hierarchy is fundamentally different from the ones used in existing methods. It is based on the feature geometry in the object space rather than its projection in the screen space. This provides certain advantages over the existing methods.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126383582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Horizon occlusion culling for real-time rendering of hierarchical terrains","authors":"Brandon Lloyd, P. Egbert","doi":"10.1109/VISUAL.2002.1183801","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183801","url":null,"abstract":"We present a technique to perform occlusion culling for hierarchical terrains at run-time. The algorithm is simple to implement and requires minimal pre-processing and additional storage, yet leads to 2-4 times improvement in framerate for views with high degrees of occlusion. Our method is based on the well-known occlusion horizon algorithm. We show how to adapt the algorithm for use with hierarchical terrains. The occlusion horizon is constructed as the terrain is traversed in an approximate front to back ordering. Regions of the terrain are compared to the horizon to determine when they are completely occluded from the viewpoint. Culling these regions leads to significant savings in rendering.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"245 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113995305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BLIC: Bi-Level Isosurface Compression","authors":"G. Taubin","doi":"10.1109/VISUAL.2002.1183807","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183807","url":null,"abstract":"In this paper we introduce a new and simple algorithm to compress isosurface data. This is the data extracted by isosurface algorithms from scalar functions defined on volume grids, and used to generate polygon meshes or alternative representations. In this algorithm the mesh connectivity and a substantial proportion of the geometric information are encoded to a fraction of a bit per marching cubes vertex with a context based arithmetic coder closely related to the JBIG binary image compression standard. The remaining optional geometric information that specifies the location of each marching cubes vertex more precisely along its supporting intersecting grid edge, is efficiently encoded in scan-order with the same mechanism. Vertex normals can optionally be computed as normalized gradient vectors by the encoder and included in the bitstream after quantization and entropy encoding, or computed by the decoder in a postprocessing smoothing step. These choices are determined by trade-offs associated with an in-core vs. out-of-core decoder structure. The main features of our algorithm are its extreme simplicity and high compression rates.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114225319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kinetic visualization: a technique for illustrating 3D shape and structure","authors":"E. Lum, Aleksander Stompel, K. Ma","doi":"10.1109/VISUAL.2002.1183805","DOIUrl":"https://doi.org/10.1109/VISUAL.2002.1183805","url":null,"abstract":"Motion provides strong visual cues for the perception of shape and depth, as demonstrated by cognitive scientists and visual artists. This paper presents a novel visualization technique-kinetic visualization -that uses particle systems to add supplemental motion cues which can aid in the perception of shape and spatial relationships of static objects. Based on a set of rules following perceptual and physical principles, particles flowing over the surface of an object not only bring out, but also attract attention to, essential information on the shape of the object that might not be readily visible with conventional rendering that uses lighting and view changes. Replacing still images with animations in this fashion, we demonstrate with both surface and volumetric models in the accompanying videos that in many cases the resulting visualizations effectively enhance the perception of three-dimensional shape and structure. The results of a preliminary user study that we have conducted also show evidence that the supplemental motion cues help.","PeriodicalId":196064,"journal":{"name":"IEEE Visualization, 2002. VIS 2002.","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114483282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}