{"title":"Either Or: Interactive Articles or Videos for Climate Science Communication","authors":"J. Poehls, M. Meuschke, N. Carvalhais, K. Lawonn","doi":"10.1111/cgf.70129","DOIUrl":"https://doi.org/10.1111/cgf.70129","url":null,"abstract":"<div>\u0000 \u0000 <p>Effective communication of climate science is critical as climate-related disasters become more frequent and severe. Translating complex information, such as uncertainties in climate model predictions, into formats accessible to diverse audiences is key to informed decision-making and public engagement. This study investigates how different teaching formats can enhance understanding of these uncertainties. This study compares two multimodal strategies: (1) a text-image format with interactive components and (2) an explainer video combining dynamic visuals with narration. Participants' immediate and delayed retention (one week) and engagement are assessed to determine which format offers greater saliency.</p>\u0000 <p>Sample analysis (n = <i>622</i>) displayed equivalent retention by viewers between both formats. Metrics assessing interactivity found no correlation between interactivity and information retention. However, a stark contrast was observed in the time viewers spent engaging with each format. The video format was 29% more efficient with information taught over a period of time vs. the article. Additionally, retention on the video format worsened with age (P = <i>0.004</i>) while retention on the article format improved with education (P = <i>0.038</i>). These results align with previous findings in literature.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70129","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LayerFlow: Layer-wise Exploration of LLM Embeddings using Uncertainty-aware Interlinked Projections","authors":"Rita Sevastjanova, Robin Gerling, Thilo Spinner, Mennatallah El-Assady","doi":"10.1111/cgf.70123","DOIUrl":"https://doi.org/10.1111/cgf.70123","url":null,"abstract":"<p>Large language models (LLMs) represent words through contextual word embeddings encoding different language properties like semantics and syntax. Understanding these properties is crucial, especially for researchers investigating language model capabilities, employing embeddings for tasks related to text similarity, or evaluating the reasons behind token importance as measured through attribution methods. Applications for embedding exploration frequently involve dimensionality reduction techniques, which reduce high-dimensional vectors to two dimensions used as coordinates in a scatterplot. This data transformation step introduces uncertainty that can be propagated to the visual representation and influence users' interpretation of the data. To communicate such uncertainties, we present <b>LayerFlow</b> – a visual analytics workspace that displays embeddings in an interlinked projection design and communicates the transformation, representation, and interpretation uncertainty. In particular, to hint at potential data distortions and uncertainties, the workspace includes several visual components, such as convex hulls showing 2D and HD clusters, data point pairwise distances, cluster summaries, and projection quality metrics. We show the usability of the presented workspace through replication and expert case studies that highlight the need to communicate uncertainty through multiple visual components and different data perspectives.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Material Boundary Visualizations in 2D Unsteady Flow through Local Reference Frame Transformations","authors":"Xingdi Zhang, Peter Rautek, Thomas Theußl, Markus Hadwiger","doi":"10.1111/cgf.70128","DOIUrl":"https://doi.org/10.1111/cgf.70128","url":null,"abstract":"<p>We present a novel technique for the extraction, visualization, and analysis of material boundaries and Lagrangian coherent structures (LCS) in 2D unsteady flow fields relative to local reference frame transformations. In addition to the input flow field, we leverage existing methods for computing reference frames adapted to local fluid features, in particular those that minimize the observed time derivative. Although, by definition, transforming objective tensor fields between reference frames does not change the tensor field, we show that transforming objective tensors, such as the finite-time Lyapunov exponent (FTLE) or Lagrangian-averaged vorticity deviation (LAVD), or the second-order rate-of-strain tensor, into local reference frames that are naturally adapted to coherent fluid structures has several advantages: (1) The transformed fields enable analyzing LCS in space-time visualizations that are adapted to each structure; (2) They facilitate extracting geometric features, such as iso-surfaces and ridge lines, in a straightforward manner with high accuracy. The resulting visualizations are characterized by lower geometric complexity and enhanced topological fidelity. To demonstrate the effectiveness of our technique, we measure geometric complexity and compare it with iso-surfaces extracted in the conventional reference frame. We show that the decreased geometric complexity of the iso-surfaces in the local reference frame, not only leads to improved geometric and topological results, but also to a decrease in computation time.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking Visual Language Models on Standardized Visualization Literacy Tests","authors":"Saugat Pandey, Alvitta Ottley","doi":"10.1111/cgf.70137","DOIUrl":"https://doi.org/10.1111/cgf.70137","url":null,"abstract":"<p>The increasing integration of Visual Language Models (VLMs) into visualization systems demands a comprehensive understanding of their visual interpretation capabilities and constraints. While existing research has examined individual models, systematic comparisons of VLMs' visualization literacy remain unexplored. We bridge this gap through a rigorous, first-of-its-kind evaluation of four leading VLMs (GPT-4, Claude, Gemini, and Llama) using standardized assessments: the Visualization Literacy Assessment Test (VLAT) and Critical Thinking Assessment for Literacy in Visualizations (CALVI). Our methodology uniquely combines randomized trials with structured prompting techniques to control for order effects and response variability - a critical consideration overlooked in many VLM evaluations. Our analysis reveals that while specific models demonstrate competence in basic chart interpretation (Claude achieving 67.9% accuracy on VLAT), all models exhibit substantial difficulties in identifying misleading visualization elements (maximum 30.0% accuracy on CALVI). We uncover distinct performance patterns: strong capabilities in interpreting conventional charts like line charts (76-96% accuracy) and detecting hierarchical structures (80-100% accuracy), but consistent difficulties with data-dense visualizations involving multiple encodings (bubble charts: 18.6-61.4%) and anomaly detection (25-30% accuracy). Significantly, we observe distinct uncertainty management behavior across models, with Gemini displaying heightened caution (22.5% question omission) compared to others (7-8%). These findings provide crucial insights for the visualization community by establishing reliable VLM evaluation benchmarks, identifying areas where current models fall short, and highlighting the need for targeted improvements in VLM architectures for visualization tasks. To promote reproducibility, encourage further research, and facilitate benchmarking of future VLMs, our complete evaluation framework, including code, prompts, and analysis scripts, is available at https://github.com/washuvis/VisLit-VLM-Eval.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SurpriseExplora: Tuning and Contextualizing Model-derived Maps with Interactive Visualizations","authors":"A. Ndlovu, H. Shrestha, E. Peck, L. Harrison","doi":"10.1111/cgf.70114","DOIUrl":"https://doi.org/10.1111/cgf.70114","url":null,"abstract":"<p>People craft choropleth maps to monitor, analyze, and understand spatially distributed data. Recent visualization work has addressed several known biases in choropleth maps by developing new model- and metrics- based approaches (e.g. Bayesian surprise). However, effective use of these techniques requires extensive parameter setting and tuning, making them difficult or impossible for users without substantial technical skills. In this paper we describe SurpriseExplora, which addresses this gap through direct manipulation techniques for re-targeting a Bayesian surprise model's scope and parameters. We present three use cases to illustrate the capabilities of SurpriseExplora, showing for example how models calculated at a national level can obscure key findings that can be revealed through interaction sequences common to map visualizations (e.g. zooming), and how augmenting funnel-plot visualizations with interactions that adjust underlying models can account for outliers or skews in spatial datasets. We evaluate SurpriseExplora through an expert review with visualization researchers and practitioners. We conclude by discussing how SurpriseExplora uncovers new opportunities for sense-making within the broader ecosystem of map visualizations, as well as potential empirical studies with non-expert populations.</p><p>Code and demo video available at https://osf.io/7m89w/</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FairSpace: An Interactive Visualization System for Constructing Fair Consensus from Many Rankings","authors":"H. Shrestha, K. Cachel, M. Alkhathlan, E. Rundensteiner, L. Harrison","doi":"10.1111/cgf.70132","DOIUrl":"https://doi.org/10.1111/cgf.70132","url":null,"abstract":"<p>Decisions involving algorithmic rankings affect our lives in many ways, from product recommendations, receiving scholarships, to securing jobs. While tools have been developed for interactively constructing fair consensus rankings from a handful of rankings, addressing the more complex real-world scenario— where diverse opinions are represented by a larger collection of rankings— remains a challenge. In this paper, we address these challenges by reformulating the exploration of rankings as a dimension reduction problem in a system called FairSpace. FairSpace provides new views, including Fair Divergence View and Cluster Views, by juxtaposing fairness metrics of different local and alternative global consensus rankings to aid ranking analysis tasks. We illustrate the effectiveness of FairSpace through a series of use cases, demonstrating via interactive workflows that users are empowered to create local consensuses by grouping rankings similar in their fairness or utility properties, followed by hierarchically aggregating local consensuses into a global consensus through direct manipulation. We discuss how FairSpace opens the possibility for advances in dimension reduction visualization to benefit the research area of supporting fair decision-making in ranking based decision-making contexts.</p><p>Code, datasets and demo video available at: osf.io/d7cwk</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Viewpoint Optimization for 3D Graph Drawings","authors":"S. van Wageningen, T. Mchedlidze, A. Telea","doi":"10.1111/cgf.70127","DOIUrl":"https://doi.org/10.1111/cgf.70127","url":null,"abstract":"<div>\u0000 \u0000 <p>Graph drawings using a node-link metaphor and straight edges are widely used to represent and understand relational data. While such drawings are typically created in 2D, 3D representations have also gained popularity. When exploring 3D drawings, finding viewpoints that help understanding the graph's structure is crucial. Finding good viewpoints also allows using the 3D drawings to generate good 2D graph drawings. In this work, we tackle the problem of automatically finding high-quality viewpoints for 3D graph drawings. We propose and evaluate strategies based on sampling, gradient descent, and evolutionary-inspired meta-heuristics. Our results show that most strategies quickly converge to high-quality viewpoints within a few dozen function evaluations, with meta-heuristic approaches showing robust performance regardless of the quality metric.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70127","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VizTA: Enhancing Comprehension of Distributional Visualization with Visual-Lexical Fused Conversational Interface","authors":"Liangwei Wang, Zhan Wang, Shishi Xiao, Le Liu, Fugee Tsung, Wei Zeng","doi":"10.1111/cgf.70110","DOIUrl":"https://doi.org/10.1111/cgf.70110","url":null,"abstract":"<p>Comprehending visualizations requires readers to interpret visual encoding and the underlying meanings actively. This poses challenges for visualization novices, particularly when interpreting distributional visualizations that depict statistical uncertainty. Advancements in LLM-based conversational interfaces show promise in promoting visualization comprehension. However, they fail to provide contextual explanations at fine-grained granularity, and chart readers are still required to mentally bridge visual information and textual explanations during conversations. Our formative study highlights the expectations for both lexical and visual feedback, as well as the importance of explicitly linking these two modalities throughout the conversation. The findings motivate the design of <i>V<span>iz</span>TA</i>, a visualization teaching assistant that leverages the fusion of visual and lexical feedback to help readers better comprehend visualization. <i>V<span>iz</span>TA</i> features a semantic-aware conversational agent capable of explaining contextual information within visualizations and employs a visual-lexical fusion design to facilitate chart-centered conversation. A between-subject study with 24 participants demonstrates the effectiveness of <i>V<span>iz</span>TA</i> in supporting the understanding and reasoning tasks of distributional visualization across multiple scenarios.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Discovery and Exploration of Visual Bias in Generative Text-to-Image Models","authors":"Johannes Eschner, Roberto Labadie-Tamayo, Matthias Zeppelzauer, Manuela Waldner","doi":"10.1111/cgf.70135","DOIUrl":"https://doi.org/10.1111/cgf.70135","url":null,"abstract":"<div>\u0000 \u0000 <p>Bias in generative Text-to-Image (T2I) models is a known issue, yet systematically analyzing such models' outputs to uncover it remains challenging. We introduce the Visual Bias Explorer (ViBEx) to interactively explore the output space of T2I models to support the discovery of visual bias. ViBEx introduces a novel flexible prompting tree interface in combination with zero-shot bias probing using CLIP for quick and approximate bias exploration. It additionally supports in-depth confirmatory bias analysis through visual inspection of forward, intersectional, and inverse bias queries. ViBEx is model-agnostic and publicly available. In four case study interviews, experts in AI and ethics were able to discover visual biases that have so far not been described in literature.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70135","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VISLIX: An XAI Framework for Validating Vision Models with Slice Discovery and Analysis","authors":"Xinyuan Yan, Xiwei Xuan, Jorge Piazentin Ono, Jiajing Guo, Vikram Mohanty, Shekar Arvind Kumar, Liang Gou, Bei Wang, Liu Ren","doi":"10.1111/cgf.70125","DOIUrl":"https://doi.org/10.1111/cgf.70125","url":null,"abstract":"<p>Real-world machine learning models require rigorous evaluation before deployment, especially in safety-critical domains like autonomous driving and surveillance. The evaluation of machine learning models often focuses on data slices, which are subsets of the data that share a set of characteristics. Data slice finding automatically identifies conditions or data subgroups where models underperform, aiding developers in mitigating performance issues. Despite its popularity and effectiveness, data slicing for vision model validation faces several challenges. First, data slicing often needs additional image metadata or visual concepts, and falls short in certain computer vision tasks, such as object detection. Second, understanding data slices is a labor-intensive and mentally demanding process that heavily relies on the expert's domain knowledge. Third, data slicing lacks a human-in-the-loop solution that allows experts to form hypothesis and test them interactively. To overcome these limitations and better support the machine learning operations lifecycle, we introduce <i>VISLIX</i>, a novel visual analytics framework that employs state-of-the-art foundation models to help domain experts analyze slices in computer vision models. Our approach does not require image metadata or visual concepts, automatically generates natural language insights, and allows users to test data slice hypothesis interactively. We evaluate <i>VISLIX</i> with an expert study and three use cases, that demonstrate the effectiveness of our tool in providing comprehensive insights for validating object detection models.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}