{"title":"Visual Abstraction of Geographical Point Data with Spatial Autocorrelations","authors":"Zhiguang Zhou, Xinlong Zhang, Zhendong Yang, Yuanyuan Chen, Yuhua Liu, Jin Wen, Binjie Chen, Ying Zhao, W. Chen","doi":"10.1109/VAST50239.2020.00011","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00011","url":null,"abstract":"Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128953046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Analytics of Multivariate Event Sequence Data in Racquet Sports","authors":"Jiang Wu, Ziyang Guo, Zuobin Wang, Qingyang Xu, Yingcai Wu","doi":"10.1109/VAST50239.2020.00009","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00009","url":null,"abstract":"In this work, we propose a generic visual analytics framework to support tactic analysis based on data collected from racquet sports (such as tennis and badminton). The proposed approach models each rally in a game as a sequence of hits (i.e., events) until one athlete scores a point. Each hit can be described with a set of attributes, such as the positions of the ball and the techniques used to hit the ball (such as drive and volley in tennis). Thus, the mentioned sequence of hits can be viewed as a multivariate event sequence. By detecting and analyzing the multivariate subsequences that frequently occur in the rallies (namely, tactical patterns), athletes can gain insights into the playing styles adopted by their opponents, and therefore help them identify systematic weaknesses of the opponents and develop counter strategies in matches. To support such analysis effectively, we propose a steerable multivariate sequential pattern mining algorithm with adjustable weights over event attributes, such that the domain expert can obtain frequent tactical patterns according to the attributes specified by himself. We also propose a re-configurable glyph design to help users simultaneously analyze multiple attributes of the hits. The framework further supports comparative analysis of the tactical patterns, e.g., for different athletes or the same athlete playing under different conditions. By applying the framework on two datasets collected in tennis and badminton matches, we demonstrate that the system is generic and effective for tactic analysis in sports and can help identify signature techniques used by individual athletes. Finally, we discuss the strengths and limitations of the proposed approach based on the feedback from the domain experts.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126234599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preface to IEEE VAST 2020 Conference Track and VAST Challenge","authors":"Brain Fisher, Ross Maciejewski, S. Miksch, Jing Yang, Kristin A. Cook, R. J. Crouser","doi":"10.1109/vast50239.2020.00005","DOIUrl":"https://doi.org/10.1109/vast50239.2020.00005","url":null,"abstract":"This is the 15th edition of IEEE Visual Analytics Science and Technology (VAST). Begun in 2006 as an IEEE Symposium at VIS, it is now in its 11th year as an IEEE Conference. It continues to be the leading forum for Visual Analytics research, defined as the science of analytical reasoning supported by interactive visual interfaces. VAST represents research pushing the boundaries of the state of the art in theory and foundations of visual data analysis, techniques and algorithms, empirical and design studies, as well as systems and applications. VAST in 2020 continues to feature its successful conference paper track, in addition to the TVCG paper track. The goal of this track is to increase the diversity of Visual Analytics applications and to better support participation of interdisciplinary researchers. It provides innovative advances and applications in Visual Analytics. The VAST 2020 Program Committee comprised 59 senior experts from the field. 210 complete submissions entered the two-stage review cycle, from which VAST eventually accepted 51 papers for the TVCG track, and 10 for the conference track. The conference track papers are published as part of the VIS USB proceedings, and submitted to the IEEE Digital Library for archival publishing. The accepted papers contribute interesting, timely ideas and results to the VAST 2020 conference sessions on Fairness and AI, Interactive Machine Learning, Text Analysis, Graphs, Evaluation and Theory, as well as Applications. Now in its 15th year, the IEEE VAST Challenge continues to pose new challenges to the visual analytics research community to encourage innovation in interactive visual representation, data transformation, and analytical reasoning. This year’s three minichallenges centered around a global internet outage, and tested participants’ abilities to explore and compare graphs, draw conclusions from poorly classified images, and to design a future visual analytic environment. The datasets and submissions are archived in the Visual Analytics Benchmark Repository (https://www.cs.umd.edu/hcil/varepository/), and papers for several submissions are published as part of the VIS USB proceedings. This year’s submissions illustrate the power of combining machine learning and interactive visualization to gain insight into complex problems.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122020555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Visual Analytics Approach to Debugging Cooperative, Autonomous Multi-Robot Systems’ Worldviews","authors":"S. Bae, Federico Rossi, J. V. Hook, Scott Davidoff, K. Ma","doi":"10.1109/VAST50239.2020.00008","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00008","url":null,"abstract":"Autonomous multi-robot systems, where a team of robots shares information to perform tasks that are beyond an individual robot’s abilities, hold great promise for a number of applications, such as planetary exploration missions. Each robot in a multi-robot system that uses the shared-world coordination paradigm autonomously schedules which robot should perform a given task, and when, using its worldview–the robot’s internal representation of its belief about both its own state, and other robots’ states. A key problem for operators is that robots’ worldviews can fall out of sync (often due to weak communication links), leading to desynchronization of the robots’ scheduling decisions and inconsistent emergent behavior (e.g., tasks not performed, or performed by multiple robots). Operators face the time-consuming and difficult task of making sense of the robots’ scheduling decisions, detecting de-synchronizations, and pinpointing the cause by comparing every robot’s worldview. To address these challenges, we introduce MOSAIC Viewer, a visual analytics system that helps operators (i) make sense of the robots’ schedules and (ii) detect and conduct a root cause analysis of the robots’ desynchronized worldviews. Over a year-long partnership with roboticists at the NASA Jet Propulsion Laboratory, we conduct a formative study to identify the necessary system design requirements and a qualitative evaluation with 12 roboticists. We find that MOSAIC Viewer is faster- and easier-to-use than the users’ current approaches, and it allows them to stitch low-level details to formulate a high-level understanding of the robots’ schedules and detect and pinpoint the cause of the desynchronized worldviews.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129881838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SilkViser: A Visual Explorer of Blockchain-based Cryptocurrency Transaction Data","authors":"Zengsheng Zhong, Shuirun Wei, Yeting Xu, Ying Zhao, Fangfang Zhou, Feng Luo, Ronghua Shi","doi":"10.1109/VAST50239.2020.00014","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00014","url":null,"abstract":"Many blockchain-based cryptocurrencies provide users with online blockchain explorers for viewing online transaction data. However, traditional blockchain explorers mostly present transaction information in textual and tabular forms. Such forms make understanding cryptocurrency transaction mechanisms difficult for novice users (NUsers). They are also insufficiently informative for experienced users (EUsers) to recognize advanced transaction information. This study introduces a new online cryptocurrency transaction data viewing tool called SilkViser. Guided by detailed scenario and requirement analyses, we create a series of appreciating visualization designs, such as paper ledger-inspired block and blockchain visualizations and ancient copper coin-inspired transaction visualizations, to help users understand cryptocurrency transaction mechanisms and recognize advanced transaction information. We also provide a set of lightweight interactions to facilitate easy and free data exploration. Moreover, a controlled user study is conducted to quantitatively evaluate the usability and effectiveness of SilkViser. Results indicate that SilkViser can satisfy the requirements of NUsers and EUsers. Our visualization designs can compensate for the inexperience of NUsers in data viewing and attract potential users to participate in cryptocurrency transactions.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131530410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"STULL: Unbiased Online Sampling for Visual Exploration of Large Spatiotemporal Data","authors":"Guizhen Wang, Jingjing Guo, Mingjie Tang, J. Q. Neto, Calvin Yau, Anas Daghistani, M. Karimzadeh, Walid G. Aref, D. Ebert","doi":"10.1109/VAST50239.2020.00012","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00012","url":null,"abstract":"Online sampling-supported visual analytics is increasingly important, as it allows users to explore large datasets with acceptable approximate answers at interactive rates. However, existing online spatiotemporal sampling techniques are often biased, as most researchers have primarily focused on reducing computational latency. Biased sampling approaches select data with unequal probabilities and produce results that do not match the exact data distribution, leading end users to incorrect interpretations. In this paper, we propose a novel approach to perform unbiased online sampling of large spatiotemporal data. The proposed approach ensures the same probability of selection to every point that qualifies the specifications of a user’s multidimensional query. To achieve unbiased sampling for accurate representative interactive visualizations, we design a novel data index and an associated sample retrieval plan. Our proposed sampling approach is suitable for a wide variety of visual analytics tasks, e.g., tasks that run aggregate queries of spatiotemporal data. Extensive experiments confirm the superiority of our approach over a state-of-the-art spatial online sampling technique, demonstrating that within the same computational time, data samples generated in our approach are at least 50% more accurate in representing the actual spatial distribution of the data and enable approximate visualizations to present closer visual appearances to the exact ones.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127102993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Visual Analytics Framework for Contrastive Network Analysis","authors":"Takanori Fujiwara, Jian Zhao, Francine Chen, K. Ma","doi":"10.1109/VAST50239.2020.00010","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00010","url":null,"abstract":"A common network analysis task is comparison of two networks to identify unique characteristics in one network with respect to the other. For example, when comparing protein interaction networks derived from normal and cancer tissues, one essential task is to discover protein-protein interactions unique to cancer tissues. However, this task is challenging when the networks contain complex structural (and semantic) relations. To address this problem, we design ContraNA, a visual analytics framework leveraging both the power of machine learning for uncovering unique characteristics in networks and also the effectiveness of visualization for understanding such uniqueness. The basis of ContraNA is cNRL, which integrates two machine learning schemes, network representation learning (NRL) and contrastive learning (CL), to generate a low-dimensional embedding that reveals the uniqueness of one network when compared to another. ContraNA provides an interactive visualization interface to help analyze the uniqueness by relating embedding results and network structures as well as explaining the learned features by cNRL. We demonstrate the usefulness of ContraNA with two case studies using real-world datasets. We also evaluate ContraNA through a controlled user study with 12 participants on network comparison tasks. The results show that participants were able to both effectively identify unique characteristics from complex networks and interpret the results obtained from cNRL.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130450543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SMAP: A Joint Dimensionality Reduction Scheme for Secure Multi-Party Visualization","authors":"Jiazhi Xia, Tianxiang Chen, Lei Zhang, Wei Chen, Yang Chen, X. Zhang, C. Xie, T. Schreck","doi":"10.1109/VAST50239.2020.00015","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00015","url":null,"abstract":"Nowadays, as data becomes increasingly complex and distributed, data analyses often involve several related datasets that are stored on different servers and probably owned by different stakeholders. While there is an emerging need to provide these stakeholders with a full picture of their data under a global context, conventional visual analytical methods, such as dimensionality reduction, could expose data privacy when multi-party datasets are fused into a single site to build point-level relationships. In this paper, we reformulate the conventional t-SNE method from the single-site mode into a secure distributed infrastructure. We present a secure multi-party scheme for joint t-SNE computation, which can minimize the risk of data leakage. Aggregated visualization can be optionally employed to hide disclosure of point-level relationships. We build a prototype system based on our method, SMAP, to support the organization, computation, and exploration of secure joint embedding. We demonstrate the effectiveness of our approach with three case studies, one of which is based on the deployment of our system in real-world applications.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129914018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ConceptExplorer: Visual Analysis of Concept Drifts in Multi-source Time-series Data","authors":"Xumeng Wang, Wei Chen, Jiazhi Xia, Zexian Chen, Dongshi Xu, Xiangyang Wu, Mingliang Xu, T. Schreck","doi":"10.1109/VAST50239.2020.00006","DOIUrl":"https://doi.org/10.1109/VAST50239.2020.00006","url":null,"abstract":"Time-series data is widely studied in various scenarios, like weather forecast, stock market, customer behavior analysis. To comprehensively learn about the dynamic environments, it is necessary to comprehend features from multiple data sources. This paper proposes a novel visual analysis approach for detecting and analyzing concept drifts from multi-sourced time-series. We propose a visual detection scheme for discovering concept drifts from multiple sourced time-series based on prediction models. We design a drift level index to depict the dynamics, and a consistency judgment model to justify whether the concept drifts from various sources are consistent. Our integrated visual interface, ConceptExplorer, facilitates visual exploration, extraction, understanding, and comparison of concepts and concept drifts from multi-source time-series data. We conduct three case studies and expert interviews to verify the effectiveness of our approach.","PeriodicalId":244967,"journal":{"name":"2020 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130362730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}