{"title":"Contextual picking of volumetric structures","authors":"P. Kohlmann, S. Bruckner, A. Kanitsar, E. Gröller","doi":"10.1109/PACIFICVIS.2009.4906855","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906855","url":null,"abstract":"This paper presents a novel method for the interactive identification of contextual interest points within volumetric data by picking on a direct volume rendered image. In clinical diagnostics the points of interest are often located in the center of anatomical structures. In order to derive the volumetric position which allows a convenient examination of the intended structure, the system automatically extracts contextual meta information from the DICOM (Digital Imaging and Communications in Medicine) images and the setup of the medical workstation. Along a viewing ray for a volumetric picking, the ray profile is analyzed for structures which are similar to predefined templates from a knowledge base. We demonstrate with our results that the obtained position in 3D can be utilized to highlight a structure in 2D slice views, to interactively calculate centerlines of tubular objects, or to place labels at contextually-defined volumetric positions.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122801268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
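The ray-profile analysis described in the abstract above could be sketched as follows. This is an illustrative assumption, not the paper's actual algorithm: it slides a 1D structure template along the scalar profile sampled on the picking ray and returns the depth of the best-matching window center via normalized cross-correlation.

```python
import numpy as np

def pick_depth(profile, template):
    """Hypothetical sketch of contextual picking along a viewing ray:
    find the profile window most similar to a knowledge-base template
    (normalized cross-correlation) and return its center sample index."""
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_i, best_score = 0, -np.inf
    for i in range(len(profile) - len(template) + 1):
        w = profile[i:i + len(template)]
        w = w - w.mean()
        wn = np.linalg.norm(w)
        if wn == 0 or tn == 0:
            continue  # flat window carries no structure to match
        score = float(w @ t) / (wn * tn)
        if score > best_score:
            best_i, best_score = i, score
    return best_i + len(template) // 2
```

The returned sample index would then be mapped back to a 3D position on the ray, e.g. to center the 2D slice views on the picked structure.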
{"title":"Structure-aware viewpoint selection for volume visualization","authors":"Y. Tao, Hai Lin, H. Bao, F. Dong, G. Clapworthy","doi":"10.1109/PACIFICVIS.2009.4906856","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906856","url":null,"abstract":"Viewpoint selection is becoming a useful part in the volume visualization pipeline, as it further improves the efficiency of data understanding by providing representative viewpoints. We present two structure-aware view descriptors, which are the shape view descriptor and the detail view descriptor, to select the optimal viewpoint with the maximum amount of the structural information. These two proposed structure-aware view descriptors are both based on the gradient direction, as the gradient is a well-defined measurement of boundary structures, which have been proved as features of interest in many applications. The shape view descriptor is designed to evaluate the overall orientation of features of interest. For estimating local details, we employ the bilateral filter to construct the shape volume. The bilateral filter is very effective in smoothing local details and preserving strong boundary structures at the same time. Therefore, large-scale global structures are in the shape volume, while small-scale local details still remain in the original volume. The detail view descriptor measures the amount of visible details on boundary structures in terms of variances in the local structure between the shape volume and the original volume. These two view descriptors can be integrated into a viewpoint selection framework, and this framework can emphasize global structures or local details with flexibility tailored to the user's specific situations. We performed experiments on various types of volume datasets. These experiments verify the effectiveness of our proposed view descriptors, and the proposed viewpoint selection framework actually locates the optimal viewpoints that show the maximum amount of the structural information.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127995413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
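A gradient-based view descriptor of the kind described above might be scored like this. The scoring formula is an assumption for illustration only (the paper's actual shape and detail descriptors are more involved): it rates a candidate viewpoint by how strongly boundary gradients face the viewer.

```python
import numpy as np

def shape_view_descriptor(gradients, view_dir):
    """Illustrative viewpoint score: sum of |g . v| over boundary-voxel
    gradient vectors g for a unit view direction v. Boundaries whose
    normals align with the view contribute most.
    `gradients`: (N, 3) array of gradient vectors at boundary voxels."""
    v = np.asarray(view_dir, dtype=float)
    v = v / np.linalg.norm(v)
    return float(np.abs(np.asarray(gradients) @ v).sum())

def best_viewpoint(gradients, candidates):
    """Pick the candidate direction with the maximal descriptor value."""
    return max(candidates, key=lambda d: shape_view_descriptor(gradients, d))
```

In a full framework, candidate directions would be sampled on a surrounding sphere, and a detail descriptor (comparing the filtered shape volume against the original) would be blended into the score.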
{"title":"Interactive feature extraction and tracking by utilizing region coherency","authors":"C. Muelder, K. Ma","doi":"10.1109/PACIFICVIS.2009.4906833","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906833","url":null,"abstract":"The ability to extract and follow time-varying flow features in volume data generated from large-scale numerical simulations enables scientists to effectively see and validate modeled phenomena and processes. Extracted features often take much less storage space and computing resources to visualize. Most feature extraction and tracking methods first identify features of interest in each time step independently, then correspond these features in consecutive time steps of the data. Since these methods handle each time step separately, they do not use the coherency of the feature along the time dimension in the extraction process. In this paper, we present a prediction-correction method that uses a prediction step to make the best guess of the feature region in the subsequent time step, followed by growing and shrinking the border of the predicted region to coherently extract the actual feature of interest. This method makes use of the temporal-space coherency of the data to accelerate the extraction process while implicitly solving the tedious correspondence problem that previous methods focus on. Our method is low cost with very little storage overhead, and thus facilitates interactive or runtime extraction and visualization, unlike previous methods which were largely suited for batch-mode processing due to high computational cost.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116856826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
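The prediction-correction idea above can be sketched in 2D for a thresholded feature. This is a minimal sketch under assumed simplifications (translation-only prediction, threshold-defined features), not the paper's method: shift the previous region by an estimated velocity, then grow into above-threshold neighbors and shrink out of below-threshold cells.

```python
import numpy as np

def track_feature(prev_mask, velocity, next_field, threshold, iters=20):
    """Prediction-correction tracking sketch on a 2D scalar field.
    prev_mask : boolean feature region at time t
    velocity  : (dy, dx) integer displacement predicting the region at t+1
    next_field: scalar field at time t+1; the feature is field > threshold."""
    # Prediction: translate the previous region by the estimated velocity.
    pred = np.roll(prev_mask, velocity, axis=(0, 1))
    # Shrink: immediately drop predicted cells that fall below threshold.
    mask = pred & (next_field > threshold)
    # Grow: repeatedly add above-threshold neighbors until stable.
    for _ in range(iters):
        dil = mask.copy()
        for ax in (0, 1):
            for sh in (1, -1):
                dil |= np.roll(mask, sh, axis=ax)
        new = dil & (next_field > threshold)
        if np.array_equal(new, mask):
            break
        mask = new
    return mask
```

Because the corrected region is derived directly from the previous one, the feature's identity carries over between time steps, which is how the correspondence problem is solved implicitly.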
{"title":"Moment curves","authors":"Daniel Patel, M. Haidacher, Jean-Paul Balabanian, E. Gröller","doi":"10.1109/PACIFICVIS.2009.4906857","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906857","url":null,"abstract":"We define a transfer function based on the first and second statistical moments. We consider the evolution of the mean and variance with respect to a growing neighborhood around a voxel. This evolution defines a curve in 3D for which we identify important trends and project it back to 2D. The resulting 2D projection can be brushed for easy and robust classification of materials and material borders. The transfer function is applied to both CT and MR data.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116899429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
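The moment-curve construction described above can be sketched directly: for each voxel, record the mean and variance of a growing neighborhood, yielding one curve point per neighborhood size. The cubic neighborhood and the function name are illustrative assumptions.

```python
import numpy as np

def moment_curve(volume, center, max_radius):
    """For a growing cubic neighborhood around `center`, record one
    (radius, mean, variance) point per radius -- a curve in 3D that can
    be projected to the 2D mean/variance plane for transfer-function
    brushing. Sketch only; the paper does not prescribe this layout."""
    z, y, x = center
    curve = []
    for r in range(1, max_radius + 1):
        block = volume[max(z - r, 0):z + r + 1,
                       max(y - r, 0):y + r + 1,
                       max(x - r, 0):x + r + 1]
        curve.append((r, float(block.mean()), float(block.var())))
    return curve
```

Inside a homogeneous material the curve stays near a single (mean, variance) point; near a material border the variance rises as the neighborhood grows, which is the trend the 2D projection exposes for classification.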