{"title":"VMC: A Grammar for Visualizing Statistical Model Checks","authors":"Ziyang Guo;Alex Kale;Matthew Kay;Jessica Hullman","doi":"10.1109/TVCG.2024.3456402","DOIUrl":"10.1109/TVCG.2024.3456402","url":null,"abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"798-808"},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large Language Models for Transforming Categorical Data to Interpretable Feature Vectors.","authors":"Karim Huesmann, Lars Linsen","doi":"10.1109/TVCG.2024.3460652","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3460652","url":null,"abstract":"<p><p>When analyzing heterogeneous data comprising numerical and categorical attributes, it is common to treat the different data types separately or transform the categorical attributes to numerical ones. The transformation has the advantage of facilitating an integrated multi-variate analysis of all attributes. We propose a novel technique for transforming categorical data into interpretable numerical feature vectors using Large Language Models (LLMs). The LLMs are used to identify the categorical attributes' main characteristics and assign numerical values to these characteristics, thus generating a multi-dimensional feature vector. The transformation can be computed fully automatically, but due to the interpretability of the characteristics, it can also be adjusted intuitively by an end user. We provide a respective interactive tool that aims to validate and possibly improve the AI-generated outputs. Having transformed a categorical attribute, we propose novel methods for ordering and color-coding the categories based on the similarities of the feature vectors.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tooth Motion Monitoring in Orthodontic Treatment by Mobile Device-based Multi-view Stereo.","authors":"Jiaming Xie, Congyi Zhang, Guangshun Wei, Peng Wang, Guodong Wei, Wenxi Liu, Min Gu, Ping Luo, Wenping Wang","doi":"10.1109/TVCG.2024.3470992","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3470992","url":null,"abstract":"<p><p>Nowadays, orthodontics has become an important part of modern personal life to assist one in improving mastication and raising self-esteem. However, the quality of orthodontic treatment still heavily relies on the empirical evaluation of experienced doctors, which lacks quantitative assessment and requires patients to visit clinics frequently for in-person examination. To resolve the aforementioned problem, we propose a novel and practical mobile device-based framework for precisely measuring tooth movement in treatment, so as to simplify and strengthen the traditional tooth monitoring process. To this end, we formulate the tooth movement monitoring task as a multi-view multi-object pose estimation problem via different views that capture multiple texture-less and severely occluded objects (i.e. teeth). Specifically, we exploit a pre-scanned 3D tooth model and a sparse set of multi-view tooth images as inputs for our proposed tooth monitoring framework. After extracting tooth contours and localizing the initial camera pose of each view from the initial configuration, we propose a joint pose estimation scheme to precisely estimate the 3D pose of each individual tooth, so as to infer their relative offsets during treatment. Furthermore, we introduce the metric of Relative Pose Bias to evaluate the individual tooth pose accuracy at a small scale. We demonstrate that our approach is capable of reaching high accuracy and efficiency as practical orthodontic treatment monitoring requires.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HUMAP: Hierarchical Uniform Manifold Approximation and Projection.","authors":"Wilson E Marcilio-Jr, Danilo M Eler, Fernando V Paulovich, Rafael M Martins","doi":"10.1109/TVCG.2024.3471181","DOIUrl":"10.1109/TVCG.2024.3471181","url":null,"abstract":"<p><p>Dimensionality reduction (DR) techniques help analysts to understand patterns in high-dimensional spaces. These techniques, often represented by scatter plots, are employed in diverse science domains and facilitate similarity analysis among clusters and data samples. For datasets containing many granularities or when analysis follows the information visualization mantra, hierarchical DR techniques are the most suitable approach since they present major structures beforehand and details on demand. This work presents HUMAP, a novel hierarchical dimensionality reduction technique designed to be flexible on preserving local and global structures and preserve the mental map throughout hierarchical exploration. We provide empirical evidence of our technique's superiority compared with current hierarchical approaches and show a case study applying HUMAP for dataset labelling.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iguanodon: A Code-Breaking Game for Improving Visualization Construction Literacy.","authors":"Patrick Adelberger, Oleg Lesota, Klaus Eckelt, Markus Schedl, Marc Streit","doi":"10.1109/TVCG.2024.3468948","DOIUrl":"10.1109/TVCG.2024.3468948","url":null,"abstract":"<p><p>In today's data-rich environment, visualization literacy-the ability to understand and communicate information through charts-is increasingly important. However, constructing effective charts can be challenging due to the numerous design choices involved. Off-the-shelf systems and libraries produce charts with carefully selected defaults that users may not be aware of, making it hard to increase their visualization literacy with those systems. In addition, traditional ways of improving visualization literacy, such as textbooks and tutorials, can be burdensome as they require sifting through a plethora of resources. To address this challenge, we designed Iguanodon, an easy-to-use game application that complements the traditional methods of improving visualization construction literacy. In our game application, users interactively choose whether to apply design choices, which we assign to sub-tasks that must be optimized to create an effective chart. The application offers multiple game variations to help users learn how different design choices should be applied to construct effective charts. Furthermore, our approach easily adapts to different visualization design guidelines. We describe the application's design and present the results of a user study with 37 participants. Our findings indicate that our game-based approach supports users in improving their visualization literacy.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Field of View Restriction and Snap Turning as Cybersickness Mitigation Tools.","authors":"Jonathan W Kelly, Taylor A Doty, Stephen B Gilbert, Michael C Dorneich","doi":"10.1109/TVCG.2024.3470214","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3470214","url":null,"abstract":"<p><p>Multiple tools are available to reduce cybersickness (sickness caused by virtual reality), but past research has not investigated the combined effects of multiple mitigation tools. Field of view (FOV) restriction limits peripheral vision during self-motion, and ample evidence supports its effectiveness for reducing cybersickness. Snap turning involves discrete rotations of the user's perspective without presenting intermediate views, although reports on its effectiveness at reducing cybersickness are limited and equivocal. Both mitigation tools reduce the visual motion that can cause cybersickness. The current study (N = 201) investigated the individual and combined effects of FOV restriction and snap turning on cybersickness when playing a consumer virtual reality game. FOV restriction and snap turning in isolation reduced cybersickness compared to a control condition without mitigation tools. Yet, the combination of FOV restriction and snap turning did not further reduce cybersickness beyond the individual tools in isolation, and in some cases the combination of tools led to cybersickness similar to that in the no mitigation control. These results indicate that caution is warranted when combining multiple cybersickness mitigation tools, which can interact in unexpected ways.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Simulation-based Approach for Quantifying the Impact of Interactive Label Correction for Machine Learning.","authors":"Yixuan Wang, Jieqiong Zhao, Jiayi Hong, Ronald G Askin, Ross Maciejewski","doi":"10.1109/TVCG.2024.3468352","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3468352","url":null,"abstract":"<p><p>Recent years have witnessed growing interest in understanding the sensitivity of machine learning to training data characteristics. While researchers have claimed the benefits of activities such as a human-in-the-loop approach of interactive label correction for improving model performance, there have been limited studies to quantitatively probe the relationship between the cost of label correction and the associated benefit in model performance. We employ a simulation-based approach to explore the efficacy of label correction under diverse task conditions, namely different datasets, noise properties, and machine learning algorithms. We measure the impact of label correction on model performance under the best-case scenario assumption: perfect correction (perfect human and visual systems), serving as an upper-bound estimation of the benefits derived from visual interactive label correction. The simulation results reveal a trade-off between the label correction effort expended and model performance improvement. Notably, task conditions play a crucial role in shaping the trade-off. Based on the simulation results, we develop a set of recommendations to help practitioners determine conditions under which interactive label correction is an effective mechanism for improving model performance.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comprehensive Evaluation of Arbitrary Image Style Transfer Methods.","authors":"Zijun Zhou, Fan Tang, Yuxin Zhang, Oliver Deussen, Juan Cao, Weiming Dong, Xiangtao Li, Tong-Yee Lee","doi":"10.1109/TVCG.2024.3466964","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3466964","url":null,"abstract":"<p><p>Despite the remarkable progress in the field of arbitrary image style transfer (AST), inconsistent evaluation continues to plague style transfer research. Existing methods often suffer from limited objective evaluation and inconsistent subjective feedback, hindering reliable comparisons among AST variants. In this study, we propose a multi-granularity assessment system that combines standardized objective and subjective evaluations. We collect a fine-grained dataset considering a range of image contexts such as different scenes, object complexities, and rich parsing information from multiple sources. Objective and subjective studies are conducted using the collected dataset. Specifically, we innovate on traditional subjective studies by developing an online evaluation system utilizing a combination of point-wise, pair-wise, and group-wise questionnaires. Finally, we bridge the gap between objective and subjective evaluations by examining the consistency between the results from the two studies. We experimentally evaluate CNN-based, flow-based, transformer-based, and diffusion-based AST methods by the proposed multi-granularity assessment system, which lays the foundation for a reliable and robust evaluation. Providing standardized measures, objective data, and detailed subjective feedback empowers researchers to make informed comparisons and drive innovation in this rapidly evolving field. Finally, for the collected dataset and our online evaluation system, please see http://ivc.ia.ac.cn.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","authors":"Jaeyoung Kim;Sihyeon Lee;Hyeon Jeon;Keon-Joo Lee;Hee-Joon Bae;Bohyoung Kim;Jinwook Seo","doi":"10.1109/TVCG.2024.3456215","DOIUrl":"10.1109/TVCG.2024.3456215","url":null,"abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable codes, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"470-480"},"PeriodicalIF":0.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-Based Synthetic Lethal Prediction","authors":"Haoran Jiang;Shaohan Shi;Shuhao Zhang;Jie Zheng;Quan Li","doi":"10.1109/TVCG.2024.3456325","DOIUrl":"10.1109/TVCG.2024.3456325","url":null,"abstract":"Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there is still a significant need among domain experts for interpretive paths and mechanism explorations that align better with domain-specific knowledge, particularly due to the high costs of experimentation. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids experts in organizing and comparing predictions and interpretive paths across different granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, enhancing expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework's efficacy through a case study and expert interviews.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"919-929"},"PeriodicalIF":0.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}