Visualization of Symmetries in Fourth-Order Stiffness Tensors
Chiara Hergl, T. Nagel, O. Kolditz, G. Scheuermann
2019 IEEE Visualization Conference (VIS), October 2019. doi:10.1109/VISUAL.2019.8933592

Many materials, such as wood, biological tissue, composites, and rock, have anisotropic mechanical properties, and such materials are increasingly important in modern material, earth, and life sciences. The stress-strain response of these materials can be characterized (to first order) by the three-dimensional fourth-order stiffness tensor. Materials fall into different anisotropy classes, i.e., material symmetries, that differ in the number and orientation of the symmetry planes characteristic of the material. A three-dimensional fourth-order stiffness tensor of a hyperelastic material has up to 21 independent coefficients, representing both moduli and orientation information, which challenges any visualization method. We therefore use a fourth-order tensor decomposition to compute the anisotropy classes and the positions of the corresponding symmetry planes. To facilitate judging the significance of the amount of anisotropy, we construct a corresponding isotropic material. Based on these computations, we design a glyph that represents the stiffness tensor. We demonstrate our method in a finite-deformation setting on an initially isotropic hyperelastic material of Ogden class, which is often used to model biological tissue. Upon deformation, the stiffness tensor and its symmetry can evolve, creating an inhomogeneous, unsteady fourth-order tensor field in three dimensions.

OCTVis: Ontology-Based Comparison of Topic Models
Amon Ge, Hyeju Jang, G. Carenini, K. Ho, Young ji Lee
2019 IEEE Visualization Conference (VIS), October 2019. doi:10.1109/VISUAL.2019.8933646

Evaluating topic modeling results requires communication between domain and NLP experts. OCTVis is a visual interface for comparing the quality of two topic models when they are mapped against a domain ontology. Its design is based on detailed data and task models, and it was tested in a case study in the healthcare domain.
{"title":"VisWall: Visual Data Exploration Using Direct Combination on Large Touch Displays","authors":"Mallika Agarwal, Arjun Srinivasan, J. Stasko","doi":"10.1109/VISUAL.2019.8933673","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933673","url":null,"abstract":"An increasing number of data visualization tools are being designed for touch-based devices ranging from smartwatches to large wall-sized displays. While most of these tools have focused on exploring novel techniques to manually specify visualizations, recent touch-based visualization systems have begun to explore interface and interaction techniques for attribute-based visualization recommendations as a way to aid users (particularly novices) during data exploration. Advancing this line of work, we present a visualization system, VisWall, that enables visual data exploration in both single user and co-located collaborative settings on large touch displays. Coupling the concepts of direct combination and derivable visualizations, VisWall enables rapid construction of multivariate visualizations using attributes of previously created visualizations. By blending visualization recommendations and naturalistic interactions, VisWall seeks to help users visually explore their data by allowing them to focus more on aspects of the data (particularly, data attributes) rather than specifying and reconfiguring visualizations. We discuss the design, interaction techniques, and operations employed by VisWall along with a scenario of how these can be used to facilitate various tasks during visual data exploration.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121034244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing Visual Guides for Casual Listeners of Live Orchestral Music","authors":"Catherine Solis, Fahimeh Rajabiyazdi, Fanny Chevalier","doi":"10.1109/VISUAL.2019.8933734","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933734","url":null,"abstract":"The experience of attending live orchestra performances is rich in cultural heritage and can be emotionally moving; however, for those unfamiliar with classical music, it can be intimidating. In this work, we explore the use of visual listening guides to supplement live performances with information that supports the casual listener’s increased engagement. We employ human-centred design practices to evaluate a currently implemented guide with users, from which we extracted design requirements. We then identify dimensions of a music piece that may be visualized and created sample guide designs. Finally, we presented these designs to experts of visualization and music theory. Feedback from the two evaluations informs design implications to consider when creating visual guides of classical music for casual listeners.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128139432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hi-D Maps: An Interactive Visualization Technique for Multi-Dimensional Categorical Data","authors":"Radi Muhammad Reza, Benjamin Watson","doi":"10.1109/VISUAL.2019.8933709","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933709","url":null,"abstract":"In this paper, we present Hi-D maps, a novel method for the visualization of multi-dimensional categorical data. Our work addresses the scarcity of techniques for visualizing a large number of data-dimensions in an effective and space-efficient manner. We have mapped the full data-space onto a 2D regular polygonal region. The polygon is cut hierarchically with lines parallel to a user-controlled, ordered sequence of sides, each representing a dimension. We have used multiple visual cues such as orientation, thickness, color, countable glyphs, and text to depict cross-dimensional information. We have added interactivity and hierarchical browsing to facilitate flexible exploration of the display: small areas can be scrutinized for details. Thus, our method is also easily extendable to visualize hierarchical information. Our glyph animations add an engaging aesthetic during interaction. Like many visualizations, Hi-D maps become less effective when a large number of dimensions stresses perceptual limits, but Hi-D maps may add clarity before those limits are reached.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127990401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TeleGam: Combining Visualization and Verbalization for Interpretable Machine Learning","authors":"Fred Hohman, Arjun Srinivasan, S. Drucker","doi":"10.1109/VISUAL.2019.8933695","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933695","url":null,"abstract":"While machine learning (ML) continues to find success in solving previously-thought hard problems, interpreting and exploring ML models remains challenging. Recent work has shown that visualizations are a powerful tool to aid debugging, analyzing, and interpreting ML models. However, depending on the complexity of the model (e.g., number of features), interpreting these visualizations can be difficult and may require additional expertise. Alternatively, textual descriptions, or verbalizations, can be a simple, yet effective way to communicate or summarize key aspects about a model, such as the overall trend in a model’s predictions or comparisons between pairs of data instances. With the potential benefits of visualizations and verbalizations in mind, we explore how the two can be combined to aid ML interpretability. Specifically, we present a prototype system, TeleGam, that demonstrates how visualizations and verbalizations can collectively support interactive exploration of ML models, for example, generalized additive models (GAMs). We describe TELEGAM’s interface and underlying heuristics to generate the verbalizations. We conclude by discussing how TeleGam can serve as a platform to conduct future studies for understanding user expectations and designing novel interfaces for interpretable ML.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"68 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131586731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Would You Like A Chart With That? Incorporating Visualizations into Conversational Interfaces","authors":"Marti A. Hearst, Melanie Tory","doi":"10.1109/VISUAL.2019.8933766","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933766","url":null,"abstract":"Conversational interfaces, such as chatbots, are increasing in prevalence, and have been shown to be preferred by and help users to complete tasks more efficiently than standard web interfaces in some cases. However, little is understood about if and how information should be visualized during the course of an interactive conversation. This paper describes studies in which participants report their preferences for viewing visualizations in chat-style interfaces when answering questions about comparisons and trends. We find a significant split in preferences among participants; approximately 40% prefer not to see charts and graphs in the context of a conversational interface. For those who do prefer to see charts, most preferred to see additional supporting context beyond the direct answer to the question. These results have important ramifications for the design of conversational interfaces to data.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128279887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evidence for Area as the Primary Visual Cue in Pie Charts","authors":"Robert Kosara","doi":"10.1109/VISUAL.2019.8933547","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933547","url":null,"abstract":"The long-standing assumption of angle as the primary visual cue used to read pie charts has recently been called into question. We conducted a controlled, preregistered study using parallel-projected 3D pie charts. Angle, area, and arc length differ dramatically when projected and change over a large range of values. Modeling these changes and comparing them to study participants’ estimates allows us to rank the different visual cues by model fit. Area emerges as the most likely cue used to read pie charts.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127990927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Graph-Assisted Visualization of Microvascular Networks
Pavel A. Govyadinov, Tasha R Womack, J. Eriksen, D. Mayerich, Guoning Chen
2019 IEEE Visualization Conference (VIS), October 2019. doi:10.1109/VISUAL.2019.8933682

Microvessels are frequent targets for research into tissue development and disease progression. The complex and subtle differences between microvascular networks are currently difficult to visualize, making sample comparisons subjective and hard to quantify. These challenges stem from the structure of microvascular networks, which are sparse but space-filling. The result is a complex and interconnected mesh that is difficult to represent and impractical to interpret using conventional visualization techniques. We develop a bi-modal visualization framework, leveraging graph-based and geometry-based techniques, to achieve interactive visualization of microvascular networks. This framework allows researchers to objectively interpret the complex and subtle variations that arise when comparing microvascular networks.
{"title":"Disentangled Representation of Data Distributions in Scatterplots","authors":"Jaemin Jo, Jinwook Seo","doi":"10.1109/VISUAL.2019.8933670","DOIUrl":"https://doi.org/10.1109/VISUAL.2019.8933670","url":null,"abstract":"We present a data-driven approach to obtain a disentangled and interpretable representation that can characterize bivariate data distributions of scatterplots. We first collect tabular datasets from the Web and build a training corpus consisting of over one million scatterplot images. Then, we train a state-of-the-art disentangling model, β-variational autoencoder, to derive a disentangled representation of the scatterplot images. The main output of this work is a list of 32 representative features that can capture the underlying structures of bivariate data distributions. Through latent traversals, we seek for high-level semantics of the features and compare them to previous human-derived concepts such as scagnostics measures. Finally, using the 32 features as an input, we build a simple neural network to predict the perceptual distances between scatterplots that were previously scored by human annotators. We found Pearson’s correlation coefficient between the predicted and perceptual distances was above 0.75, which indicates the effectiveness of our representation in the quantitative characterization of scatterplots.","PeriodicalId":192801,"journal":{"name":"2019 IEEE Visualization Conference (VIS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115480011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}