IEEE Symposium on Volume Visualization (Cat. No.989EX300): Latest Publications

A real-time volume rendering architecture using an adaptive resampling scheme for parallel and perspective projections
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288146
M. Ogata, T. Ohkami, H. Lauer, H. Pfister
Abstract: The paper describes an object-order real-time volume rendering architecture using an adaptive resampling scheme to perform resampling operations in a unified parallel-pipeline manner for both parallel and perspective projections. Unlike parallel projections, perspective projections require a variable resampling structure due to diverging perspective rays. To address this issue, we propose an adaptive pipelined convolution block for resampling operations that uses the level of resolution to keep the parallel pipeline structure regular. We also propose to use multiresolution datasets prepared for different levels of grid resolution to bound the convolution operations. The proposed convolution block is organized as a systolic array, which works well with a distributed skewed memory for conflict-free access to voxels. We present results from software simulations of the proposed architecture and discuss important technical issues.
Citations: 15
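The abstract gives the architecture only at a high level; the sketch below illustrates, under illustrative assumptions (a mip-style multiresolution volume, a simple spacing-matching rule, hypothetical names), how a level of resolution might be chosen so that diverging perspective rays keep a bounded, pipeline-friendly convolution footprint. It is not the authors' hardware design.

```python
# A minimal sketch of the idea behind the adaptive scheme: as perspective rays
# diverge, pick a coarser level of a multiresolution volume so that the
# resampling kernel stays a fixed size.  All names and parameters are illustrative.
import math

def resolution_level(dist_to_eye, ray_spacing_at_unit_dist, voxel_size, max_level):
    """Choose the grid level whose voxel spacing best matches the local ray spacing."""
    # In a perspective projection the spacing between neighbouring rays grows
    # linearly with distance from the eye.
    local_ray_spacing = ray_spacing_at_unit_dist * dist_to_eye
    # Level L uses voxels of size voxel_size * 2**L; pick the finest level that is
    # not finer than the ray spacing, so the convolution footprint stays bounded.
    level = int(math.floor(math.log2(max(local_ray_spacing / voxel_size, 1.0))))
    return min(max(level, 0), max_level)

if __name__ == "__main__":
    for d in (1.0, 4.0, 16.0, 64.0):
        print(d, resolution_level(d, ray_spacing_at_unit_dist=0.01,
                                  voxel_size=0.01, max_level=4))
```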
Coloring voxel-based objects for virtual endoscopy
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288140
Omer Shibolet, D. Cohen-Or
Abstract: The paper describes a method for coloring voxel-based models. The method generalizes the two-part texture mapping technique to color non-convex objects in a more natural way. It was developed for coloring internal cavities in virtual endoscopy, where the surfaces are shaped like a general cylinder at the macro level but have folds and bumps at finer levels of detail. Given a flat texture, the coloring method defines a mapping between the 3D surface and the texture which reflects the tensions of the points on the surface. The core of the method is a technique for mapping such non-convex surfaces to convex ones. The new technique is based on a discrete dilation process that is fast and robust, and bypasses many of the numerical problems common to previous methods.
Citations: 18
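The dilation-based mapping itself is not spelled out in the abstract; as a rough stand-in only, the sketch below uses a Euclidean feature transform to link every voxel to its nearest object voxel, giving the kind of correspondence (from an outer, more convex shell back to the folded inner surface) that texture coordinates could be pulled through. scipy is assumed; the toy data and names are illustrative.

```python
# A rough stand-in (not the paper's algorithm) for the correspondence a discrete
# dilation process can produce: every voxel is linked to its closest object
# voxel, so colours assigned on an outer, more convex shell can be pulled back
# onto the folded inner surface.
import numpy as np
from scipy import ndimage

def nearest_object_index(object_mask):
    """For every voxel, distance to and index of the closest object voxel."""
    # distance_transform_edt measures distance to the nearest zero voxel, so
    # invert the mask: voxels outside the object get distances to the object.
    dist, idx = ndimage.distance_transform_edt(~object_mask, return_indices=True)
    return dist, idx   # idx has shape (3, nz, ny, nx)

if __name__ == "__main__":
    vol = np.zeros((32, 32, 32), dtype=bool)
    vol[8:24, 8:24, 8:24] = True          # a toy cavity wall
    vol[14:18, 14:18, 20:24] = False      # carve a fold into it
    dist, idx = nearest_object_index(vol)
    print(dist.max(), idx.shape)
```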
Opacity-weighted color interpolation for volume sampling
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288186
C. Wittenbrink, T. Malzbender, Michael E. Goss
Abstract: Volume rendering creates images from sampled volumetric data. The compute-intensive nature of volume rendering has driven research in algorithm optimization. An important speed optimization is the use of preclassification and preshading. The authors demonstrate an artifact that results from interpolating preclassified or preshaded colors and opacity values separately; this method is flawed and leads to visible artifacts. They present an improved technique, opacity-weighted color interpolation, and evaluate its RMS error improvement, its hardware and algorithmic efficiency, and the resulting improvements. They show analytically that opacity-weighted color interpolation exactly reproduces material-based interpolation results for certain volume classifiers, with the efficiencies of preclassification. The proposed technique may also have broad impact on opacity-texture-mapped polygon rendering.
Citations: 87
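A minimal sketch of the artifact and the fix named in the abstract: interpolating preclassified color and opacity separately lets an invisible (zero-opacity) sample bleed its color into the result, whereas interpolating the opacity-weighted color (alpha times C) together with alpha, then dividing, does not. Plain NumPy, for illustration only.

```python
import numpy as np

def lerp(a, b, t):
    return (1.0 - t) * a + t * b

def naive_sample(c0, a0, c1, a1, t):
    """Separate interpolation of colour and opacity (produces the artifact)."""
    return lerp(c0, c1, t), lerp(a0, a1, t)

def opacity_weighted_sample(c0, a0, c1, a1, t):
    """Interpolate (alpha * C) and alpha, then recover C."""
    a = lerp(a0, a1, t)
    ac = lerp(a0 * c0, a1 * c1, t)
    c = ac / a if a > 0.0 else np.zeros_like(c0)
    return c, a

if __name__ == "__main__":
    # Voxel 0: opaque red; voxel 1: fully transparent, but classified green.
    c0, a0 = np.array([1.0, 0.0, 0.0]), 1.0
    c1, a1 = np.array([0.0, 1.0, 0.0]), 0.0
    print("naive:           ", naive_sample(c0, a0, c1, a1, 0.5))
    print("opacity-weighted:", opacity_weighted_sample(c0, a0, c1, a1, 0.5))
    # The naive sample picks up green from an invisible voxel; the
    # opacity-weighted sample stays pure red at half opacity.
```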
Adaptive perspective ray casting
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288154
K. Kreeger, I. Bitter, F. Dachille, Baoquan Chen, A. Kaufman
Abstract: We present a method to accurately and efficiently perform perspective volumetric ray casting of uniform regular datasets, called Exponential-Region (ER) Perspective. Unlike previous methods, which undersample, oversample, or approximate the data, our method samples the data nearly uniformly throughout the viewing volume. In addition, it gains algorithmic advantages from a regular sampling pattern and cache-coherent read access, making it well suited for implementation on hardware architectures for volume rendering. We qualify the algorithm by its filtering characteristics and demonstrate its effectiveness by contrasting its antialiasing quality and timing with other perspective ray casting methods.
Citations: 34
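The abstract does not detail the region construction, so the following is only a sketch of the general idea, assuming regions whose depth doubles away from the eye (hence "exponential") and a per-ray sample step that doubles with each region, keeping the step roughly proportional to the local inter-ray spacing. Ray merging and the filtering analysis are not shown.

```python
def region_boundaries(near, far):
    """Front depths of exponential regions: near, 2*near, 4*near, ... up to far."""
    bounds, d = [], near
    while d < far:
        bounds.append(d)
        d *= 2.0
    bounds.append(far)
    return bounds

def sample_depths(near, far, base_step):
    """Sample positions along one ray; the step doubles with each region."""
    bounds = region_boundaries(near, far)
    depths, step = [], base_step
    for front, back in zip(bounds[:-1], bounds[1:]):
        d = front
        while d < back:
            depths.append(d)
            d += step
        step *= 2.0
    return depths

if __name__ == "__main__":
    # 24 samples here, versus 252 with a uniform 0.25 step over the same range.
    print(len(sample_depths(1.0, 64.0, 0.25)))
```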
Probabilistic segmentation of volume data for visualization using SOM-PNN classifier
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288162
Feng Ma, Wenping Wang, W. W. Tsang, Zesheng Tang, Shaowei Xia, Xin Tong
Abstract: We present a new probabilistic classifier, called the SOM-PNN classifier, for volume data classification and visualization. The new classifier produces a probabilistic classification with a Bayesian confidence measure, which is highly desirable in volume rendering. Based on a SOM map trained with a large training data set, our SOM-PNN classifier performs the probabilistic classification using the PNN algorithm. This combined use of SOM and PNN overcomes the shortcomings of parametric methods, nonparametric methods, and the SOM method. The proposed SOM-PNN classifier has been used to segment the CT sloth data and 20 human MRI brain volumes, resulting in much more informative 3D renderings with more detail and fewer artifacts than other methods. Numerical comparisons demonstrate that the SOM-PNN classifier is a fast, accurate, and probabilistic classifier for volume rendering.
Citations: 24
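A minimal sketch of the PNN stage only, assuming the SOM codebook vectors have already been trained and labeled by class; all symbols and the toy data are illustrative. Each class density is a sum of Gaussian kernels centered on that class's prototypes, and normalizing the densities yields the per-voxel posterior probabilities, which is the kind of Bayesian confidence the abstract refers to.

```python
import numpy as np

def pnn_posteriors(x, prototypes, labels, n_classes, sigma=1.0):
    """Posterior class probabilities for feature vector x."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)          # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian kernel values
    scores = np.array([k[labels == c].sum() for c in range(n_classes)])
    total = scores.sum()
    return scores / total if total > 0 else np.full(n_classes, 1.0 / n_classes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    protos = np.vstack([rng.normal(0.0, 0.3, (8, 2)),    # class 0 prototypes
                        rng.normal(2.0, 0.3, (8, 2))])   # class 1 prototypes
    labels = np.array([0] * 8 + [1] * 8)
    print(pnn_posteriors(np.array([1.8, 2.1]), protos, labels, n_classes=2, sigma=0.5))
```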
Adding shadows to a texture-based volume renderer
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288149
U. Behrens, R. Ratering
Abstract: Texture-based volume rendering is a technique to efficiently visualize volumetric data using texture mapping hardware. We present an algorithm that extends this approach to render shadows for the volume. The algorithm takes advantage of the fast frame-buffer operations that modern graphics hardware offers, but does not depend on any special-purpose hardware. The visual impression of the final image is significantly improved by bringing more structure and three-dimensional information into the often foggy appearance of texture-based volume renderings. Although the algorithm does not perform lighting calculations, the resulting image has a shaded appearance, which is a further visual cue for spatial understanding of the data and makes the images appear more realistic. Because calculating the shadows is independent of the visualization process, it can be applied to any form of volume visualization, though volume rendering based on two- or three-dimensional texture mapping hardware makes the most sense. Compared to unshadowed texture-based volume rendering, performance decreases by less than 50%, which is still sufficient to guarantee interactive manipulation of the volume data. In the special case where only the camera moves and the light position is fixed to the scene, there is no performance decrease at all, because recalculation is only needed when the position of the light source relative to the volume changes.
Citations: 89
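A CPU sketch, not the frame-buffer implementation the paper describes: sweep the volume slice by slice away from the light, track how much light survives to each slice, and darken that slice accordingly. For simplicity it assumes a directional light aligned with the slicing axis; the names are illustrative.

```python
import numpy as np

def shadowed_slices(alpha):
    """alpha: (n_slices, h, w) opacities, slice 0 closest to the light.
    Returns the fraction of light reaching each slice."""
    light = np.ones_like(alpha[0])
    reaching = np.empty_like(alpha)
    for i in range(alpha.shape[0]):
        reaching[i] = light                 # light arriving at this slice
        light = light * (1.0 - alpha[i])    # attenuate for the slices behind it
    return reaching

if __name__ == "__main__":
    a = np.zeros((4, 2, 2)); a[1] = 0.5     # one semi-opaque slice
    print(shadowed_slices(a)[:, 0, 0])      # [1.0, 1.0, 0.5, 0.5]
```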
Volume animation using the skeleton tree
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288152
N. Gagvani, Deepak R. Kenchammana-Hosekote, D. Silver
Abstract: We describe a technique to animate volumes using a volumetric skeleton. The skeleton is computed from the actual volume, based on a reversible thinning procedure using the distance transform. Polygons are never computed, and the entire process remains in the volume domain. The skeletal points are connected and arranged in a "skeleton tree", which can be used for articulation in an animation program. The full volume object is regrown from the transformed skeletal points. Since the skeleton is an intuitive mechanism for animation, the animator deforms the skeleton and causes corresponding deformations in the volume object. The volumetric skeleton can also be used for volume morphing, automatic path navigation, volume smoothing and compression/decimation.
Citations: 51
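The reversible thinning procedure is beyond a short sketch; the fragment below only illustrates a crude first step under stated assumptions: take the distance transform of the voxel object and keep local maxima of the distance field as candidate skeletal points. Connecting them into the skeleton tree and regrowing the volume are not shown; scipy is assumed and the toy data is illustrative.

```python
import numpy as np
from scipy import ndimage

def skeleton_candidates(object_mask):
    """Voxels where the distance transform is a local maximum."""
    dist = ndimage.distance_transform_edt(object_mask)
    local_max = ndimage.maximum_filter(dist, size=3)
    return (dist > 0) & (dist == local_max)

if __name__ == "__main__":
    vol = np.zeros((20, 20, 60), dtype=bool)
    vol[5:15, 5:15, :] = True                     # a simple "limb"
    pts = np.argwhere(skeleton_candidates(vol))
    print(len(pts), pts.min(axis=0), pts.max(axis=0))
```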
An exact interactive time visibility ordering algorithm for polyhedral cell complexes
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288170
Cláudio T. Silva, Joseph S. B. Mitchell, Peter L. Williams
Abstract: A visibility ordering of a set of objects, from a given viewpoint, is a total order on the objects such that if object a obstructs object b, then b precedes a in the ordering. Such orderings are extremely useful for rendering volumetric data. The authors present an algorithm that generates a visibility ordering of the cells of an unstructured mesh, provided that the cells are convex polyhedra and nonintersecting, and that the visibility ordering graph does not contain cycles. The overall mesh may be nonconvex and it may have disconnected components. The technique employs the sweep paradigm to determine an ordering between pairs of exterior (mesh boundary) cells which can obstruct one another. It then builds on Williams' (1992) MPVO algorithm, which exploits the ordering implied by adjacencies within the mesh. The partial ordering of the exterior cells found by sweeping is used to augment the DAG created in Phase II of the MPVO algorithm. The method thus removes the assumption of the MPVO algorithm that the mesh be convex and connected, and thereby allows one to extend the MPVO algorithm without using the heuristics that were originally suggested by Williams (and are sometimes problematic). The resulting XMPVO algorithm has been analyzed, and a variation of it has been implemented for unstructured tetrahedral meshes; they provide experimental evidence that it performs very well in practice.
Citations: 64
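Once the obstruction relations between cells are known (finding them via the sweep and the MPVO adjacency graph is the paper's contribution and is not shown here), a visibility ordering is a topological sort of the resulting DAG: if a obstructs b, then b must be drawn first. A minimal sketch with hypothetical inputs:

```python
from collections import defaultdict, deque

def visibility_order(n_cells, obstructs):
    """obstructs: list of (a, b) pairs meaning 'cell a obstructs cell b'."""
    succ, indeg = defaultdict(list), [0] * n_cells
    for a, b in obstructs:
        succ[b].append(a)       # b must be drawn before a
        indeg[a] += 1
    queue = deque(c for c in range(n_cells) if indeg[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for nxt in succ[c]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != n_cells:
        raise ValueError("obstruction graph contains a cycle; no ordering exists")
    return order

if __name__ == "__main__":
    # Cell 2 obstructs 1, and 1 obstructs 0: draw 0, then 1, then 2.
    print(visibility_order(3, [(2, 1), (1, 0)]))
```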
Edge preservation in volume rendering using splatting
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288158
Jian Huang, R. Crawfis, D. Stredney
Abstract: The paper presents a method to preserve sharp edge details in splatting for volume rendering. Conventional splatting algorithms produce fuzzy images for views close to the volume model. The lack of detail in such views greatly hinders the study and manipulation of data sets using virtual navigation. Our method applies a nonlinear warping to the footprints of conventional splats and builds a table of footprints for different possible edge positions and edge strengths. When rendering, we pick a footprint from the table for each splat, based on the relative position of the voxel to the closest edge. Encouraging results have been achieved for both synthetic and medical data.
Citations: 12
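The abstract states that footprints are precomputed for a range of edge positions and strengths and looked up per splat, but it does not give the warp itself. The sketch below is illustrative only: it mimics the table-lookup structure by masking a 1D Gaussian footprint with a soft step placed at the edge offset, which is not the paper's actual footprint construction.

```python
import numpy as np

def footprint_table(radius=3, n_offsets=9, strengths=(0.0, 0.5, 1.0)):
    """Precompute footprints indexed by (edge strength, edge-offset index)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    gauss = np.exp(-0.5 * (x / (radius / 2.0)) ** 2)
    table = {}
    for s in strengths:
        for i, off in enumerate(np.linspace(-radius, radius, n_offsets)):
            step = 1.0 / (1.0 + np.exp(-(x - off) * 4.0))   # soft edge at 'off'
            fp = gauss * ((1.0 - s) + s * step)              # blend plain/warped
            table[(s, i)] = fp / fp.sum()                    # keep energy constant
    return table

if __name__ == "__main__":
    tab = footprint_table()
    print(tab[(0.0, 4)].round(2))   # strength 0: the ordinary footprint
    print(tab[(1.0, 4)].round(2))   # strength 1: footprint cut off at the edge
```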
3D scan conversion of CSG models into distance volumes
IEEE Symposium on Volume Visualization (Cat. No.989EX300) Pub Date: 1998-10-01 DOI: 10.1145/288126.288137
D. Breen, S. Mauch, Ross T. Whitaker
Abstract: A distance volume is a volume dataset where the value stored at each voxel is the shortest distance to the surface of the object being represented by the volume. Distance volumes are a useful representation in a number of computer graphics applications. We present a technique for generating a distance volume with sub-voxel accuracy from one type of geometric model: a constructive solid geometry (CSG) model consisting of superellipsoid primitives. The distance volume is generated in a two-step process. The first step calculates the shortest distance to the CSG model at a set of points within a narrow band around the evaluated surface. Additionally, a second set of points, labeled the zero set, which lies on the CSG model's surface, is computed, and a point in the zero set is associated with each point in the narrow band. Once the narrow band and zero set are calculated, a fast marching method is employed to propagate the shortest-distance and closest-point information out to the remaining voxels in the volume. Our technique has been used to scan convert a number of CSG models, producing distance volumes which have been utilized in a variety of computer graphics applications, e.g. CSG surface evaluation, offset surface generation, and 3D model morphing.
Citations: 129
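A rough sketch only: the paper computes true shortest distances in a narrow band and propagates them with a fast marching method, neither of which is shown here. The fragment below evaluates the superellipsoid inside-outside function as a signed pseudo-distance and combines primitives with min/max, a common but only approximate way to obtain CSG "distance" values on a grid; the primitives and parameters are illustrative.

```python
import numpy as np

def superellipsoid(p, a, b, c, e1, e2):
    """Inside-outside based pseudo-distance: negative inside, positive outside."""
    x, y, z = np.abs(p[..., 0] / a), np.abs(p[..., 1] / b), np.abs(p[..., 2] / c)
    f = (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)
    return f ** (e1 / 2.0) - 1.0

def csg_union(d1, d2):     return np.minimum(d1, d2)
def csg_intersect(d1, d2): return np.maximum(d1, d2)
def csg_subtract(d1, d2):  return np.maximum(d1, -d2)

if __name__ == "__main__":
    g = np.linspace(-2, 2, 33)
    p = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
    box  = superellipsoid(p, 1.0, 1.0, 1.0, 0.2, 0.2)   # box-like primitive
    ball = superellipsoid(p, 0.7, 0.7, 0.7, 1.0, 1.0)   # sphere
    vol = csg_subtract(box, ball)                        # box with a hole
    print(vol.shape, (vol < 0).sum())                    # voxels inside the model
```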