PINGU: Principles of Interactive Navigation for Geospatial Understanding
Zoltán Orémus, Kahin Akram Hassan, Jiří Chmelík, M. Kňažková, J. Byška, R. Raidou, B. Kozlíková
2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.7567
Abstract: Monitoring conditions in the periglacial areas of Antarctica helps geographers and geologists understand the physical processes associated with mesoscale land systems. Analyzing these unique temporal datasets poses a significant challenge for domain experts, because the data are complex and often incomplete and corresponding exploratory tools are not available. In this paper, we present a novel visual analysis tool for the extraction and interactive exploration of temporal measurements captured at the polar station on James Ross Island in Antarctica. The tool allows domain experts to quickly extract information about the snow level from a series of photos acquired by trail cameras. Using linked views, the domain experts can interactively explore and combine this information with other spatial and non-spatial measures, such as temperature or wind speed, to reveal the interplay of periglacial and aeolian processes. An abstracted interactive map of the area indicates the positions of the measurement spots to facilitate navigation. The tool was designed in tight collaboration with geographers, which resulted in an early prototype tested in a pilot study. The subsequent version of the tool and its usability were evaluated in a user study with five domain experts, whose feedback was incorporated into the final version presented in this paper. This version was again discussed with two experts in an informal interview. In these evaluations, the experts confirmed the significant benefit of the tool for their research tasks.

DynamicsExplorer: Visual Analytics for Robot Control Tasks involving Dynamics and LSTM-based Control Policies
Wenbin He, Teng-Yok Lee, J. Baar, K. Wittenburg, Han-Wei Shen
2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.7127
Abstract: Deep reinforcement learning (RL), in which a policy represented by a deep neural network is trained, has shown some success in playing video games and chess. However, applying RL to real-world tasks such as robot control is still challenging. Because generating the massive number of samples needed to train control policies with RL on real robots is very expensive, and hence impractical, it is common to train in simulation and then transfer to the real environment. The trained policy, however, may fail in the real world because of differences between the training and real environments, especially differences in dynamics. To diagnose such problems, it is crucial for experts to understand (1) how the trained policy behaves under different dynamics settings, (2) which parts of the policy affect the behavior most when the dynamics setting changes, and (3) how to adjust the training procedure to make the policy robust. This paper presents DynamicsExplorer, a visual analytics tool for diagnosing trained policies on robot control tasks under different dynamics settings. DynamicsExplorer allows experts to overview the results of multiple tests with different dynamics-related parameter settings, so they can visually detect failures and analyze the sensitivity of different parameters. Experts can further examine the internal activations of the policy for selected tests and compare the activations between successful and failed tests. Such comparisons help experts form hypotheses about the policy and allow them to verify these hypotheses via DynamicsExplorer. Multiple use cases demonstrate the utility of DynamicsExplorer.

{"title":"Immersive WYSIWYG (What You See is What You Get) Volume Visualization","authors":"Song Wang, D. Zhu, Haozhou Yu, Yadong Wu","doi":"10.1109/PacificVis48177.2020.1001","DOIUrl":"https://doi.org/10.1109/PacificVis48177.2020.1001","url":null,"abstract":"Extended to immersive environment, volume visualization has the analytical superiority in spatial immersion, user engagement, multidimensional awareness and other aspects. But in a highly immersive virtual environment, traditional single-channel precise interactive methods cannot be applied directly to the immersive environment. Inspired by how users typically interact with everyday objects, a novel non-contact gesture interaction method base on What You See is What You Get(WYSIWYG)for volume rendering results is proposed in this paper. Just likes grab interaction in real scene, a full set of tools have been developed to enable direct volume rendering manipulation of color, saturation, contrast, brightness, and other optical properties by gestural motions in our method. Simultaneously, in order to improve the interactive experience in immersive environment, the evaluation model of motion comfort is introduced to design the interactive hand gestures, the cursor model is defined to estimating the gesture state combined with context gestural motions. Finally, the test platform is established with Oculus Rift + Leap Motion to verify the functionality and effectiveness of our method in improving the visual cognitive ability for volume visualization.","PeriodicalId":322092,"journal":{"name":"2020 IEEE Pacific Visualization Symposium (PacificVis)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124113340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LBVis: Interactive Dynamic Load Balancing Visualization for Parallel Particle Tracing
Jiang Zhang, Changhe Yang, Yanda Li, Li Chen, Xiaoru Yuan
2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.1029
Abstract: We propose an interactive visual analytical approach for exploring and diagnosing the dynamic load-balancing (data and task partitioning) process of parallel particle tracing in flow visualization. To understand the complex nature of the parallel processes, it is necessary to integrate information about the behaviors and patterns of the computing processes, data changes and movements, and task status and exchanges, and to gain insight into the relationships among them. In our approach, data and task behaviors are visualized through a graph with a carefully designed layout, in which node glyphs show the status of processes and links represent the data or task transfers between different computation rounds and processes. User interactions are supported to facilitate the exploration and performance analysis. We provide a case study to demonstrate that the proposed approach enables users to identify bottlenecks during this process and thus helps optimize the related algorithms.

{"title":"Interactive Assigning of Conference Sessions with Visualization and Topic Modeling","authors":"Yun Han, Zhenhuang Wang, Siming Chen, Guozheng Li, X. Zhang, Xiaoru Yuan","doi":"10.1109/PacificVis48177.2020.1027x","DOIUrl":"https://doi.org/10.1109/PacificVis48177.2020.1027x","url":null,"abstract":"Creating thematic sessions based on accepted papers is important to the success of a conference. Facing a large number of papers from multiple topics, conference organizers need to identify the topics of papers and group them into sessions by considering the constraints on session numbers and paper numbers in individual sessions. In this paper, we present a system using visualization and topic modeling to help the construction of conference sessions. The system provides multiple automatically generated session schemes and allows users to create, evaluate, and manipulate paper sessions with given constraints. A case study based on our system on the VAST papers shows that our method can help users successfully construct coherent conference sessions. In addition to conference session management, our method can be extended to other tasks, such as event and class schedule.","PeriodicalId":322092,"journal":{"name":"2020 IEEE Pacific Visualization Symposium (PacificVis)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134065568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Graph Map Animation
Seok-Hee Hong, P. Eades, Marnijati Torkel, Weidong Huang, C. Cifuentes
2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.1042
Abstract: Recent methods for visualizing graphs have used a map metaphor: vertices are represented as regions in the plane, and proximity between regions represents edges between vertices. In many real-world applications, the data change over time, resulting in a dynamic map. This paper introduces new methods for representing dynamic graphs with map animation. More specifically, we present three different animation methods: MDSV (Multidimensional scaling - Voronoi), TV (Tutte - Voronoi), and TD (Tutte - dual). These methods support operations such as the addition and deletion of vertices and edges. Each of our methods uses a kind of matrix interpolation.

Uncertainty Visualisation: An Interactive Visual Survey
Amit Jena, U. Engelke, Tim Dwyer, Venkatesh Rajamanickam, Cécile Paris
2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.1014
Abstract: There exists a gulf between the rhetoric in visualisation research about the significance of uncertainty and the inclusion of representations of uncertainty in visualisations used in practice. The graphical representation of uncertainty information has emerged as a problem of great importance in visualisation research. This contribution presents a survey of 286 uncertainty visualisation research publications. All publications are categorised with regard to publication type, publication venue, application domain, target user, and evaluation type. We present an interactive web-based browser that facilitates easy visual search and exploration of the publications included in the survey. We conclude that uncertainty visualisation is severely limited by the quality and scope of uncertainty data, by the limited confidence in the data, and by the perceptual and cognitive confusion that the graphical representation of the data can generate.

{"title":"Distribution-based Particle Data Reduction for In-situ Analysis and Visualization of Large-scale N-body Cosmological Simulations","authors":"Guan Li, Jiayi Xu, Tianchi Zhang, Guihua Shan, Han-Wei Shen, Ko-Chih Wang, Shihong Liao, Zhonghua Lu","doi":"10.1109/PacificVis48177.2020.1186","DOIUrl":"https://doi.org/10.1109/PacificVis48177.2020.1186","url":null,"abstract":"Cosmological N-body simulation is an important tool for scientists to study the evolution of the universe. With the increase of computing power, billions of particles of high space-time fidelity can be simulated by supercomputers. However, limited computer storage can only hold a small subset of the simulation output for analysis, which makes the understanding of the underlying cosmological phenomena difficult. To alleviate the problem, we design an in-situ data reduction method for large-scale unstructured particle data. During the data generation phase, we use a combined k-dimensional partitioning and Gaussian mixture model approach to reduce the data by utilizing probability distributions. We offer a model evaluation criterion to examine the quality of the probabilistic distribution models, which allows us to identify and improve low-quality models. After the in-situ processing, the particle data size is greatly reduced, which satisfies the requirements from the domain experts. By comparing the astronomical attributes and visualizations of the reconstructed data with the raw data, we demonstrate the effectiveness of our in-situ particle data reduction technique.","PeriodicalId":322092,"journal":{"name":"2020 IEEE Pacific Visualization Symposium (PacificVis)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133771770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Interpretation of Recurrent Neural Network on Multi-dimensional Time-series Forecast
Qiaomu Shen, Yanhong Wu, Yuzhe Jiang, Wei Zeng, A. Lau, Anna Vilanova, Huamin Qu
2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.2785
Abstract: Recent attempts at utilizing visual analytics to interpret recurrent neural networks (RNNs) mainly focus on natural language processing (NLP) tasks that take symbolic sequences as input. However, many real-world problems, such as environmental pollution forecasting, apply RNNs to sequences of multi-dimensional data in which each dimension represents an individual feature with semantic meaning, such as PM2.5 or SO2. RNN interpretation on multi-dimensional sequences is challenging, as users need to analyze which features are important at different time steps to better understand model behavior and gain trust in the predictions. This requires effective and scalable visualization methods to reveal the complex many-to-many relations between hidden units and features. In this work, we propose a visual analytics system to interpret RNNs on multi-dimensional time-series forecasts. Specifically, to provide an overview that reveals the model mechanism, we propose a technique to estimate the hidden unit response by measuring how different feature selections affect the hidden unit output distribution. We then cluster the hidden units and features based on the response embedding vectors. Finally, the system allows users to visually explore the model behavior at the global and individual levels. We demonstrate the effectiveness of our approach with case studies using air pollutant forecast applications.