{"title":"PrismBreak: Exploration of Multi-Dimensional Mixture Models","authors":"B. Zahoransky, T. Günther, K. Lawonn","doi":"10.1111/cgf.70121","DOIUrl":"https://doi.org/10.1111/cgf.70121","url":null,"abstract":"<div>\u0000 \u0000 <p>In data science, visual data exploration becomes increasingly more challenging due to the continued rapid increase of data dimensionality and data sizes. To manage complexity, two orthogonal approaches are commonly used in practice: First, data is frequently clustered in high-dimensional space by fitting mixture models composed of normal distributions or Student t-distributions. Second, dimensionality reduction is employed to embed high-dimensional point clouds in a two- or three-dimensional space. Those algorithms determine the spatial arrangement in low-dimensional space without further user interaction. This leaves little room for a guided exploration and data analysis. In this paper, we propose a novel visualization system for the effective exploration and construction of potential subspaces onto which mixture models can be projected. The subspaces are spanned linearly via basis vectors, for which a vast number of basis vector combinations is theoretically imaginable. Our system guides the user step-by-step through the selection process by letting users choose one basis vector at a time. To guide the process, multiple choices are pre-visualized at once on a multi-faceted prism. In addition to the qualitative visualization of the distributions, multiple quantitative metrics are calculated by which subspaces can be compared and reordered, including variance, sparsity, and visibility. Further, a bookmarking tool lets users record and compare different basis vector combinations. 
The usability of the system is evaluated by data scientists and is tested on several high-dimensional data sets.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70121","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Entertainment: An Investigation of Externalization Design in Video Games","authors":"F. Becker, R. P. Warnking, H. Brückler, T. Blascheck","doi":"10.1111/cgf.70124","DOIUrl":"https://doi.org/10.1111/cgf.70124","url":null,"abstract":"<div>\u0000 \u0000 <p>This article investigates when and how video games enable players to create externalizations in a diverse sample of 388 video games. We follow a grounded-theory approach, extracting externalizations from video games to explore design ideas and relate them to practices in visualization. Video games often engage players in problem-solving activities, like solving a murder mystery or optimizing a strategy, requiring players to interpret heterogeneous data—much like tasks in the visualization domain. In many cases, externalizations can help reduce a user's mental load by making tangible what otherwise only lives in their head, acting as external storage or a visual playground. Over five coding phases, we created a hierarchy of 277 tags to describe the video games in our collection, from which we extracted 169 externalizations. We characterize these externalizations along nine dimensions like mental load, visual encodings, and motivations, resulting in 13 categories divided into four clusters: quick access, storage, sensemaking, and communication. 
We formulate considerations to guide future work, looking at tasks and challenges, naming potentials for inspiration, and discussing which topics could advance the state of externalization.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70124","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embedded and Situated Visualisation in Mixed Reality to Support Interval Running","authors":"A. Li, C. Perin, J. Knibbe, G. Demartini, S. Viller, M. Cordeil","doi":"10.1111/cgf.70133","DOIUrl":"https://doi.org/10.1111/cgf.70133","url":null,"abstract":"<div>\u0000 \u0000 <p>We investigate the use of mixed reality visualisations to help pace tracking for interval running. We introduce three immersive visual designs to support pace tracking. Our designs leverage two properties afforded by mixed reality environments to display information: the space in front of the user and the physical environment to embed pace visualisation. In this paper, we report on the first design exploration and controlled study of mixed reality technology to support pacing tracking during interval running on an outdoor running track. Our results show that mixed reality and immersive visualisation designs for interval training offer a viable option to help runners (a) maintain regular pace, (b) maintain running flow, and (c) reduce mental task load.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70133","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NODKANT: Exploring Constructive Network Physicalization","authors":"D. Pahr, S. Di Bartolomeo, H. Ehlers, V. A. Filipov, C. Stoiber, W. Aigner, H.-Y. Wu, R. G. Raidou","doi":"10.1111/cgf.70140","DOIUrl":"https://doi.org/10.1111/cgf.70140","url":null,"abstract":"<div>\u0000 \u0000 <p>Physicalizations, which combine perceptual and sensorimotor interactions, offer an immersive way to comprehend complex data visualizations by stimulating active construction and manipulation. This study investigates the impact of personal construction on the comprehension of physicalized networks. We propose a physicalization toolkit—<b>NODKANT</b>—for constructing modular node-link diagrams consisting of a magnetic surface, 3D printable and stackable node labels, and edges of adjustable length. In a mixed-methods between-subject lab study with 27 participants, three groups of people used <b>NODKANT</b> to complete a series of low-level analysis tasks in the context of an animal contact network. The first group was tasked with freely constructing their network using a sorted edge list, the second group received step-by-step instructions to create a predefined layout, and the third group received a pre-constructed representation. While free construction proved on average more time-consuming, we show that users extract more insights from the data during construction and interact with their representation more frequently, compared to those presented with step-by-step instructions. Interestingly, the increased time demand cannot be measured in users' subjective task load. Finally, our findings indicate that participants who constructed their own representations were able to recall more detailed insights after a period of 10–14 days compared to those who were given a pre-constructed network physicalization. 
All materials, data, code for generating instructions, and 3D printable meshes are available on https://osf.io/tk3g5/.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70140","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast HARDI Uncertainty Quantification and Visualization with Spherical Sampling","authors":"Tark Patel, Tushar M. Athawale, Timbwaoga A. J. Ouermi, Chris R. Johnson","doi":"10.1111/cgf.70138","DOIUrl":"https://doi.org/10.1111/cgf.70138","url":null,"abstract":"<div>\u0000 \u0000 <p>In this paper, we study uncertainty quantification and visualization of orientation distribution functions (ODF), which corresponds to the diffusion profile of high angular resolution diffusion imaging (HARDI) data. The shape inclusion probability (SIP) function is the state-of-the-art method for capturing the uncertainty of ODF ensembles. The current method of computing the SIP function with a volumetric basis exhibits high computational and memory costs, which can be a bottleneck to integrating uncertainty into HARDI visualization techniques and tools. We propose a novel spherical sampling framework for faster computation of the SIP function with lower memory usage and increased accuracy. In particular, we propose direct extraction of SIP isosurfaces, which represent confidence intervals indicating spatial uncertainty of HARDI glyphs, by performing spherical sampling of ODFs. Our spherical sampling approach requires much less sampling than the state-of-the-art volume sampling method, thus providing significantly enhanced performance, scalability, and the ability to perform implicit ray tracing. Our experiments demonstrate that the SIP isosurfaces extracted with our spherical sampling approach can achieve up to 8164× speedup, 37282× memory reduction, and 50.2% less SIP isosurface error compared to the classical volume sampling approach. 
We demonstrate the efficacy of our methods through experiments on synthetic and human-brain HARDI datasets.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70138","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast and Invertible Simplicial Approximation of Magnetic-Following Interpolation for Visualizing Fusion Plasma Simulation Data","authors":"Congrong Ren, Robert Hager, Randy Michael Churchill, Albert Mollén, Seung-Hoe Ku, Choong-Seock Chang, Hanqi Guo","doi":"10.1111/cgf.70120","DOIUrl":"https://doi.org/10.1111/cgf.70120","url":null,"abstract":"<div>\u0000 \u0000 <p>We introduce a fast and invertible approximation for fusion plasma simulation data represented as 2D planar meshes with connectivities approximating magnetic field lines along the toroidal dimension in deformed 3D toroidal spaces. Scientific variables (e.g., density and temperature) in these fusion data are interpolated following a complex magnetic-field-line-following scheme in the toroidal space represented by a cylindrical coordinate system. This deformation in the 3D space poses challenges for root-finding and interpolation. To this end, we propose a novel paradigm for visualizing and analyzing such data based on a newly developed algorithm for constructing a 3D simplicial mesh within the deformed 3D space. Our algorithm generates a tetrahedral mesh that connects the 2D meshes using tetrahedra while adhering to the constraints on node connectivities imposed by the magnetic field-line scheme. Specifically, we first divide the space into smaller partitions to reduce complexity based on the input geometries and constraints on connectivities. Then, we independently search for a feasible tetrahedralization of each partition, considering nonconvexity. We demonstrate our method with two X-Point Gyrokinetic Code (XGC) simulation datasets on the International Thermonuclear Experimental Reactor (ITER) and Wendelstein 7-X (W7-X), and use an ocean simulation dataset to substantiate broader applicability of our method. 
An open source implementation of our algorithm is available at https://github.com/rcrcarissa/DeformedSpaceTet.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70120","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Euclidean, Hyperbolic, and Spherical Networks: An Empirical Study of Matching Network Structure to Best Visualizations","authors":"Jacob Miller, Dhruv Bhatia, Helen Purchase, Stephen Kobourov","doi":"10.1111/cgf.70126","DOIUrl":"https://doi.org/10.1111/cgf.70126","url":null,"abstract":"<div>\u0000 \u0000 <p>We investigate the usability of Euclidean, spherical and hyperbolic geometries for network visualization. Several techniques have been proposed for both spherical and hyperbolic network visualization tools, based on the fact that some networks admit lower embedding error (distortion) in such non-Euclidean geometries. However, it is not yet known whether a lower embedding error translates to human subject benefits, e.g., better task accuracy or lower task completion time. We design, implement, conduct, and analyze a human subjects study to compare Euclidean, spherical and hyperbolic network visualizations using tasks that span the network task taxonomy. While in some cases accuracy and response times are negatively impacted when using non-Euclidean visualizations, the evaluation shows that differences in accuracy for hyperbolic and spherical visualizations are not statistically significant when compared to Euclidean visualizations. 
Additionally, differences in response times for spherical visualizations are not statistically significant compared to Euclidean visualizations.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70126","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Interactive Visual Enhancement for Prompted Programmatic Weak Supervision in Text Classification","authors":"Y. Lin, S. Wei, H. Zhang, D. Qu, J. Bai","doi":"10.1111/cgf.70131","DOIUrl":"https://doi.org/10.1111/cgf.70131","url":null,"abstract":"<p>Programmatic Weak Supervision (PWS) has emerged as a powerful technique for text classification. By aggregating weak labels provided by manually written label functions, it allows training models on large-scale unlabeled data without the need for costly manual annotations. As an improvement, Prompted PWS incorporates pre-trained large language models (LLMs) as part of the label function, replacing programs coded by experts with natural language prompts. This allows for the more accessible expression of complex and ambiguous concepts. However, the existing workflow does not fully utilize the advantages of Prompted PWS, and the annotators have difficulty in effectively converging their ideas to develop high-quality LFs, and lack support during the iterations. To address this issue, this study improves the existing PWS workflow through interactive visualization. We first propose a collaborative LF development workflow between humans and LLMs, where the large language model assists humans in creating a structured development space for exploration and automatically generates prompted LFs based on human selections. Annotators can integrate their knowledge through informed selection and judgment. Then, we present an interactive visual system that supports efficient development, in-depth exploration, and iteration of LFs. 
Our evaluation, comprising a quantitative evaluation on the benchmark, a case study, and a user study, demonstrates the effectiveness of our approach.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InterChat: Enhancing Generative Visual Analytics using Multimodal Interactions","authors":"Juntong Chen, Jiang Wu, Jiajing Guo, Vikram Mohanty, Xueming Li, Jorge Piazentin Ono, Wenbin He, Liu Ren, Dongyu Liu","doi":"10.1111/cgf.70112","DOIUrl":"https://doi.org/10.1111/cgf.70112","url":null,"abstract":"<p>The rise of Large Language Models (LLMs) and generative visual analytics systems has transformed data-driven insights, yet significant challenges persist in accurately interpreting users analytical and interaction intents. While language inputs offer flexibility, they often lack precision, making the expression of complex intents inefficient, error-prone, and time-intensive. To address these limitations, we investigate the design space of multimodal interactions for generative visual analytics through a literature review and pilot brainstorming sessions. Building on these insights, we introduce a highly extensible workflow that integrates multiple LLM agents for intent inference and visualization generation. We develop InterChat, a generative visual analytics system that combines direct manipulation of visual elements with natural language inputs. This integration enables precise intent communication and supports progressive, visually driven exploratory data analyses. By employing effective prompt engineering, and contextual interaction linking, alongside intuitive visualization and interaction designs, InterChat bridges the gap between user interactions and LLM-driven visualizations, enhancing both interpretability and usability. Extensive evaluations, including two usage scenarios, a user study, and expert feedback, demonstrate the effectiveness of InterChat. 
Results show significant improvements in the accuracy and efficiency of handling complex visual analytics tasks, highlighting the potential of multimodal interactions to redefine user engagement and analytical depth in generative visual analytics.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sca2Gri: Scalable Gridified Scatterplots","authors":"S. Frey","doi":"10.1111/cgf.70141","DOIUrl":"https://doi.org/10.1111/cgf.70141","url":null,"abstract":"<div>\u0000 \u0000 <p>Scatterplots are widely used in exploratory data analysis. Representing data points as glyphs is often crucial for in-depth investigation, but this can lead to significant overlap and visual clutter. Recent post-processing techniques address this issue, but their computational and/or visual scalability is generally limited to thousands of points and unable to effectively deal with large datasets in the order of millions. This paper introduces Sca<sup>2</sup>Gri (Scalable Gridified Scatterplots), a grid-based post-processing method designed for analysis scenarios where the number of data points substantially exceeds the number of glyphs that can be reasonably displayed. Sca<sup>2</sup>Gri enables interactive grid generation for large datasets, offering flexible user control of glyph size, maximum displacement for point to cell mapping, and scatterplot focus area. 
While Sca<sup>2</sup>Gri's computational complexity scales cubically with the number of cells (which is practically bound to thousands for legible glyph sizes), its complexity is linear with respect to the number of data points, making it highly scalable beyond millions of points.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70141","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}