{"title":"Evaluation of symbol contrast in scatterplots","authors":"Jing Li, J. V. Wijk, J. Martens","doi":"10.1109/PACIFICVIS.2009.4906843","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906843","url":null,"abstract":"Symbols are frequently used to represent data objects in visualization. An appropriate contrast between symbols is a precondition that determines the efficiency of a visual analysis process. We study the contrast between different types of symbols in the context of scatterplots, based on user testing and a quantitative model for symbol contrast. In total, 32 different symbols were generated by using four sizes, two classes (polygon-and asterisk shaped), and four categories of rotational symmetry; and used three different tasks. From the user test results an internal separation space is established for the symbol types under study. In this space, every symbol is represented by a point, and the visual contrasts defined by task performance between the symbols are represented by the distances between the points. The positions of the points in the space, obtained by Multidimensional Scaling (MDS), reveal the effects of different visual feature scales. Also, larger distances imply better symbol separation for visual tasks, and therefore indicate appropriate choices for symbols. The resulting configurations are discussed, and a number of patterns in the relation between properties of the symbols and the resulting contrast are identified. 
In short, we found that the size effect in the space is not linear and is more dominant than the shape effect.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123099037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A self-adaptive treemap-based technique for visualizing hierarchical data in 3D","authors":"Abon Chaudhuri, Han-Wei Shen","doi":"10.1109/PACIFICVIS.2009.4906844","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906844","url":null,"abstract":"In this paper, we present a novel adaptive visualization technique where the constituting polygons dynamically change their geometry and other visual attributes depending on user interaction. These changes take place with the objective of conveying required level of detail to the user through each view. Our proposed technique is successfully applied to build a treemap-based but 3D visualization of hierarchical data, a widely used information structure. This new visualization exploits its adaptive nature to address issues like cluttered display, imperceptible hierarchy, lack of smooth zoom-in and out technique which are common in tree visualization. We also present an algorithm which utilizes the flexibility of our proposed technique to deal with occlusion, a problem inherent in any 3D information visualization. On one hand, our work establishes adaptive visualization as a means of displaying tree-structured data in 3D. On the other, it promotes the technique as a potential candidate for being employed to visualize other information structures also.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116630831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending the spring-electrical model to overcome warping effects","authors":"Yifan Hu, Y. Koren","doi":"10.1109/PACIFICVIS.2009.4906847","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906847","url":null,"abstract":"The spring-electrical model based force directed algorithm is widely used for drawing undirected graphs, and sophisticated implementations can be very efficient for visualizing large graphs. However, our practical experience shows that in many cases, layout quality suffers as a result of non-uniform vertex density. This gives rise to warping effects in that vertices on the outskirt of the drawing are often closer to each other than those near the center, and branches in a tree-like graph tend to cling together. In this paper we propose algorithms that overcome these effects. The algorithms combine the efficiency and good global structure of the spring-electrical model, with the flexibility of the Kamada-Kawai stress model of in specifying the ideal edge length, and are very effective in overcoming the warping effects.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133921783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TugGraph: Path-preserving hierarchies for browsing proximity and paths in graphs","authors":"D. Archambault, T. Munzner, D. Auber","doi":"10.1109/PACIFICVIS.2009.4906845","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906845","url":null,"abstract":"Many graph visualization systems use graph hierarchies to organize a large input graph into logical components. These approaches detect features globally in the data and place these features inside levels of a hierarchy. However, this feature detection is a global process and does not consider nodes of the graph near a feature of interest. TugGraph is a system for exploring paths and proximity around nodes and subgraphs in a graph. The approach modifies a pre-existing hierarchy in order to see how a node or subgraph of interest extends out into the larger graph. It is guaranteed to create path-preserving hierarchies, so that the abstraction shown is meaningful with respect to the structure of the graph. The system works well on graphs of hundreds of thousands of nodes and millions of edges. TugGraph is able to present views of this proximal information in the context of the entire graph in seconds, and does not require a layout of the full graph as input.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131410352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Out-of-core volume rendering for time-varying fields using a space-partitioning time (SPT) tree","authors":"Zhiyan Du, Yi-Jen Chiang, Han-Wei Shen","doi":"10.1109/PACIFICVIS.2009.4906840","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906840","url":null,"abstract":"In this paper, we propose a novel out-of-core volume rendering algorithm for large time-varying fields. Exploring temporal and spatial coherences has been an important direction for speeding up the rendering of time-varying data. Previously, there were techniques that hierarchically partition both the time and space domains into a data structure so as to re-use some results from the previous time step in multiresolution rendering; however, it has not been studied on which domain should be partitioned first to obtain a better re-use rate. We address this open question, and show both theoretically and experimentally that partitioning the time domain first is better. We call the resulting structure (a binary time tree as the primary structure and an octree as the secondary structure) the space-partitioning time (SPT) tree. Typically, our SPT-tree rendering has a higher level of details, a higher re-use rate, and runs faster. In addition, we devise a novel cut-finding algorithm to facilitate efficient out-of-core volume rendering using our SPT tree, we develop a novel out-of-core preprocessing algorithm to build our SPT tree I/O-efficiently, and we propose modified error metrics with a theoretical guarantee of a monotonicity property that is desirable for the tree search. 
The experiments on datasets as large as 25GB using a PC with only 2GB of RAM demonstrated the efficacy of our new approach.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114409718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visibility-driven transfer functions","authors":"Carlos D. Correa, K. Ma","doi":"10.1109/PACIFICVIS.2009.4906854","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906854","url":null,"abstract":"Direct volume rendering is an important tool for visualizing complex data sets. However, in the process of generating 2D images from 3D data, information is lost in the form of attenuation and occlusion. The lack of a feedback mechanism to quantify the loss of information in the rendering process makes the design of good transfer functions a difficult and time consuming task. In this paper, we present the notion of visibility-driven transfer functions, which are transfer functions that provide a good visibility of features of interest from a given viewpoint. To achieve this, we introduce visibility histograms. These histograms provide graphical cues that intuitively inform the user about the contribution of particular scalar values to the final image. By carefully manipulating the parameters of the opacity transfer function, users can now maximize the visibility of the intervals of interest in a volume data set. Based on this observation, we also propose a semi-automated method for generating transfer functions, which progressively improves a transfer function defined by the user, according to a certain importance metric. Now the user does not have to deal with the tedious task of making small changes to the transfer function parameters, but now he/she can rely on the system to perform these searches automatically. 
Our methodology can be easily deployed in most visualization systems and can be used together with traditional 1D opacity transfer functions based on scalar values, as well as with multidimensional transfer functions and other more sophisticated rendering algorithms.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131576218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual support for the understanding of simulation processes","authors":"A. Unger, H. Schumann","doi":"10.1109/PACIFICVIS.2009.4906838","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906838","url":null,"abstract":"Current visualization systems are typically based on the concept of interactive post-processing. This decoupling of data visualization from the process of data generation offers a flexible application of visualization tools. It can also lead, however, to information loss in the visualization. Therefore, a combination of the visualization of the data generating process with the visualization of the produced data offers significant support for the understanding of the abstract data sets as well as the underlying process. Due to the application-specific characteristics of data generating processes, the task requires tailored visualization concepts. In this work, we focus on the application field of simulating biochemical reaction networks as discrete-event systems. These stochastic processes generate multi-run and multivariate time-series, which are analyzed and compared on three different process levels: model, experiment, and the level of multi-run simulation data, each associated with a broad range of analysis goals. To meet these challenging characteristics, we present visualization concepts specifically tailored to all three process levels. The fundament of all three visualization concepts is a compact view that relates the multi-run simulation data to the characteristics of the model structure and the experiment. The view provides the visualization at the experiment level. The visualization at the model level coordinates multiple instances of this view for the comparison of experiments. At the level of multi-run simulation data, the views gives an overview on the data, which can be analyzed in detail in time-series views suited for the analysis goals. 
Although we derive our visualization concepts for one concrete simulation process, the underlying idea of tailoring visualization concepts to process levels is generally applicable to the visualization of simulation processes.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134487980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A visual canonical adjacency matrix for graphs","authors":"Hongli Li, G. Grinstein, L. Costello","doi":"10.1109/PACIFICVIS.2009.4906842","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906842","url":null,"abstract":"Graph data mining algorithms rely on graph canonical forms to compare different graph structures. These canonical form definitions depend on node and edge labels. In this paper, we introduce a unique canonical visual matrix representation that only depends on a graph's topological information, so that two structurally identical graphs will have exactly the same visual adjacency matrix representation. In this canonical matrix, nodes are ordered based on a Breadth-First Search spanning tree. Special rules and filters are designed to guarantee the uniqueness of an arrangement. Such a unique matrix representation provides persistence and a stability which can be used and harnessed in visualization, especially for data exploration and studies.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124560191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HiMap: Adaptive visualization of large-scale online social networks","authors":"Lei Shi, Nan Cao, Shixia Liu, Weihong Qian, Li Tan, Guodong Wang, Jimeng Sun, Ching-Yung Lin","doi":"10.1109/PACIFICVIS.2009.4906836","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906836","url":null,"abstract":"Visualizing large-scale online social network is a challenging yet essential task. This paper presents HiMap, a system that visualizes it by clustered graph via hierarchical grouping and summarization. HiMap employs a novel adaptive data loading technique to accurately control the visual density of each graph view, and along with the optimized layout algorithm and the two kinds of edge bundling methods, to effectively avoid the visual clutter commonly found in previous social network visualization tools. HiMap also provides an integrated suite of interactions to allow the users to easily navigate the social map with smooth and coherent view transitions to keep their momentum. Finally, we confirm the effectiveness of HiMap algorithms through graph-travesal based evaluations.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115641495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast and sleek glyph rendering for interactive HARDI data exploration","authors":"T. Peeters, V. Prčkovska, M. Almsick, A. Vilanova, B. H. Romeny","doi":"10.1109/PACIFICVIS.2009.4906851","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2009.4906851","url":null,"abstract":"High angular resolution diffusion imaging (HARDI) is an emerging magnetic resonance imaging (MRI) technique that overcomes some decisive limitations of its predecessor diffusion tensor imaging (DTI). HARDI can resolve locally more than one direction in the diffusion pattern of water molecules and thereby opens up the opportunity to display and track crossing fibers. Showing the local structure of the reconstructed, angular probability profiles in a fast, detailed, and interactive way can improve the quality of the research in this area and help to move it into clinical application. In this paper we present a novel approach for HARDI glyph visualization or, more generally, for the visualization of any function that resides on a sphere and that can be expressed by a Laplace series. Our GPU-accelerated glyph rendering improves the performance of the traditional way of HARDI glyph visualization as well as the visual quality of the reconstructed data, thus offering interactive HARDI data exploration of the local structure of the white brain matter in-vivo. 
In this paper we exploit the capabilities of modern GPUs to overcome the processor-intensive and memory-consuming nature of this large-scale data visualization.","PeriodicalId":133992,"journal":{"name":"2009 IEEE Pacific Visualization Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131359786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}