Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images
Alexander Vieth, A. Vilanova, B. Lelieveldt, E. Eisemann, T. Höllt
2022 IEEE 15th Pacific Visualization Symposium (PacificVis), published 2022-02-18. DOI: https://doi.org/10.1109/PacificVis53943.2022.00010

Abstract: High-dimensional imaging is becoming increasingly relevant in many fields, from astronomy and cultural heritage to systems biology. Visual exploration of such high-dimensional data is commonly facilitated by dimensionality reduction. However, common dimensionality reduction methods do not incorporate the spatial information present in images, such as local texture features, into the construction of low-dimensional embeddings. Consequently, exploration of such data is typically split into a step focusing on the attribute space followed by a step focusing on spatial information, or vice versa. In this paper, we present a method for incorporating spatial neighborhood information into distance-based dimensionality reduction methods, such as t-Distributed Stochastic Neighbor Embedding (t-SNE). We achieve this by modifying the distance measure between the high-dimensional attribute vectors associated with each pixel so that it takes the pixel's spatial neighborhood into account. Based on a classification of methods for comparing image patches, we explore a number of different approaches, which we compare from a theoretical and experimental point of view. Finally, we illustrate the value of the proposed methods through qualitative and quantitative evaluation on synthetic data and two real-world use cases.

A Machine-Learning-Aided Visual Analysis Workflow for Investigating Air Pollution Data
Yun-Hsin Kuo, Takanori Fujiwara, C. Chou, Chun Chen, K. Ma
2022 IEEE 15th Pacific Visualization Symposium (PacificVis), published 2022-02-11. DOI: https://doi.org/10.1109/PacificVis53943.2022.00018

Abstract: Analyzing air pollution data is challenging because the analysis can focus on different aspects: feature (what), space (where), and time (when). As in most geospatial analysis problems, the temporal and spatial dependencies of air pollution, on top of its high-dimensional features, complicate the analysis. Machine learning methods, such as dimensionality reduction, can extract and summarize the important information in the data to ease the burden of understanding such a complicated environment. In this paper, we present a methodology that utilizes multiple machine learning methods to uniformly explore these aspects. With this methodology, we develop a visual analytics system that supports a flexible analysis workflow, allowing domain experts to freely explore different aspects based on their analysis needs. We demonstrate the capability of our system and analysis workflow with multiple use cases covering a variety of analysis tasks.
{"title":"News Kaleidoscope: Visual Investigation of Coverage Diversity in News Event Reporting","authors":"Aditi Mishra, Shashank Ginjpalli, Chris Bryan","doi":"10.1109/PacificVis53943.2022.00022","DOIUrl":"https://doi.org/10.1109/PacificVis53943.2022.00022","url":null,"abstract":"When a newsworthy event occurs, media articles that report on the event can vary widely-a concept known as coverage diversity. To help investigate coverage diversity in event reporting, we de-velop a visual analytics system called News Kaleidoscope. News Kaleidoscope combines several backend language processing techniques with a coordinated visualization interface. Notably, News Kaleidoscope is tailored for visualization non-experts, and adopts an analytic workflow based around subselection analysis, whereby second-level features of articles are extracted to provide a more detailed and nuanced analysis of coverage diversity. To robustly evaluate News Kaleidoscope, we conduct a trio of user studies. (1) A study with news experts assesses the insights promoted for our targeted journalism-savvy users. (2) A follow-up study with news novices assesses the overall system and the specific insights pro-moted for journalism-agnostic users. (3) Based on identified system limitations in these two studies, we refine News Kaleidoscope's design and conduct a third study to validate these improvements. Results indicate that, for both news novice and experts, News Kalei-doscope supports an effective, task-driven workflow for analyzing the diversity of news coverage about events, though journalism expertise has a significant influence on the user's insights and take-aways. Our insights developing and evaluating News Kaleidoscope can aid future tools that combine visualization with natural language processing to analyze coverage diversity in news event reporting.","PeriodicalId":117284,"journal":{"name":"2022 IEEE 15th Pacific Visualization Symposium (PacificVis)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130383535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Why? Why not? When? Visual Explanations of Agent Behaviour in Reinforcement Learning
Aditi Mishra, Utkarsh Soni, Jinbin Huang, Chris Bryan
2022 IEEE 15th Pacific Visualization Symposium (PacificVis), published 2021-04-06. DOI: https://doi.org/10.1109/PacificVis53943.2022.00020

Abstract: Reinforcement learning (RL) is used in many domains, including autonomous driving, robotics, stock trading, and video games. Unfortunately, the black-box nature of RL agents, combined with legal and ethical considerations, makes it increasingly important that humans (including those who are not experts in RL) understand the reasoning behind the actions taken by an RL agent, particularly in safety-critical domains. To help address this challenge, we introduce PolicyExplainer, a visual analytics interface that lets the user directly query an autonomous agent. PolicyExplainer visualizes the states, policy, and expected future rewards of an agent, and supports asking and answering questions such as: “Why take this action? Why not take this other action? When is this action taken?” PolicyExplainer is designed based on a domain analysis with RL researchers and is evaluated via qualitative and quantitative assessments on a trio of domains: taxi navigation, a stack bot domain, and drug recommendation for HIV patients. We find that PolicyExplainer's visual approach promotes trust in and understanding of agent decisions better than a state-of-the-art text-based explanation approach. Interviews with domain practitioners provide further validation for PolicyExplainer as applied to safety-critical domains. Our results help demonstrate how visualization-based approaches can be leveraged to decode the behavior of autonomous RL agents, particularly for RL non-experts.