{"title":"The UDV Card Deck: A Collaborative Design Framework to Facilitate Urban Visualization Conversations.","authors":"Damla Cay, Till Nagel, Sebastian Meier","doi":"10.1109/MCG.2025.3556573","DOIUrl":"10.1109/MCG.2025.3556573","url":null,"abstract":"<p><p>This paper presents the Urban Data Visualization (UDV) card deck, a tool designed to facilitate reflective discussions and inform the collaborative design process of urban data visualizations. The UDV card deck was developed to bridge the gap between theoretical knowledge and practice in workshop settings, fostering inclusive and reflective approaches to visualization design. Drawing from urban visualization design literature and the results from a series of expert workshops, these cards summarize key considerations when designing urban data visualizations. The card deck guides different activities in an engaging, collaborative, and structured format, promoting inclusion of diverse urban actors. We introduce the card deck and its goals, demonstrate its use in four case studies, and discuss our findings. Feedback from workshop participants indicates that the UDV card deck can serve as a supportive and reflective tool for urban data visualization researchers, designers and practitioners.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Data Does and Does Not Represent: Visualizing the Archive of Slavery.","authors":"Shiyao Li, Margy Adams, Tanvi Sharma, Jay Varner, Lauren Klein","doi":"10.1109/MCG.2025.3553412","DOIUrl":"10.1109/MCG.2025.3553412","url":null,"abstract":"<p><p>This paper presents a design report on a humanistically-informed data visualization of a dataset related to the trans-Atlantic slave trade. The visualization employs a quantitative dataset of slaving voyages that took place between 1565 and 1858 and uses historical scholarship and humanistic theory in order to call attention to the people behind the data, as well as to what the data does not or cannot represent. In the paper, we summarize the intersecting histories of slavery and data and then outline the theories that inform our design: of the archive of slavery, of the dangers of restaging historical violence, and of visibility, opacity, representation, and resistance. We then describe our design approach and discuss the visualization's ability to honor the lives of the enslaved by calling attention to their acts of resistance, both recorded and unrecorded.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2024 IEEE Scientific Visualization Contest Winner: PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images.","authors":"Yiming Shao, Chengming Liu, Zhiyuan Meng, Shufan Qian, Peng Jiang, Yunhai Wang, Qiong Zeng","doi":"10.1109/MCG.2025.3550365","DOIUrl":"10.1109/MCG.2025.3550365","url":null,"abstract":"<p><p>Plume visualization is essential for unveiling the dynamics of hydrothermal systems. This paper introduces an interactive exploration tool, PlumeViz, designed to facilitate the extraction and visualization of multifaceted plume characteristics from data collected by a sonar device. The tool addresses the challenges posed by undersampled volume data and intricate plume structures by providing an interactive platform for plume identification, visual representation, and analysis. Key functionalities of PlumeViz encompass comprehensive plume evolution, plume feature extraction, and in-depth exploration of specific regions of interest. We demonstrate the efficacy of PlumeViz in visualizing hydrothermal plumes through a case study and a range of illustrative results.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Collective Storytelling: Investigating Audience Annotations in Data Visualizations.","authors":"Tobias Kauer, Marian Dork, Benjamin Bach","doi":"10.1109/MCG.2025.3547944","DOIUrl":"10.1109/MCG.2025.3547944","url":null,"abstract":"<p><p>This work investigates personal perspectives in visualization annotations as devices for collective data-driven storytelling. Inspired by existing efforts in critical cartography, we show how people share personal memories in a visualization of COVID-19 data and how comments by other visualization readers influence the reading and understanding of visualizations. Analyzing interaction logs, reader surveys, visualization annotations, and interviews, we find that reader annotations help other viewers relate to other people's stories and reflect on their own experiences. Further, we found that annotations embedded directly into the visualization can serve as social traces that guide readers through a visualization and help them contextualize their own stories. With that, they supersede the attention paid to data encodings and become the main focal point of the visualization.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voting-Based Intervention Planning Using AI-Generated Images.","authors":"Ioannis Kavouras, Ioannis Rallis, Emmanuel Sardis, Anastasios Doulamis, Nikolaos Doulamis","doi":"10.1109/MCG.2025.3553620","DOIUrl":"10.1109/MCG.2025.3553620","url":null,"abstract":"<p><p>The continuous evolution of artificial intelligence and advanced algorithms capable of generating information from simplified input creates new opportunities for several scientific fields. Currently, the applicability of such technologies is limited to the art and medical domains, but they can also be applied in engineering to help architects and urban planners design environmentally friendly solutions by proposing several alternatives in a short time. This work uses an image-inpainting algorithm to suggest several alternative solutions for four European cities. In addition, this work proposes a voting-based framework for finding the most preferred solution for each case study. The voting-based framework involves the participation of citizens and, as a result, decentralizes and democratizes the urban planning process. Finally, this research indicates the importance of deploying generative models in engineering applications by showing that generative AI models are capable of supporting architects and urban planners in urban planning procedures.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"31-46"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflections on the Use of Dashboards in the COVID-19 Pandemic.","authors":"Alessio Arleo, Rita Borgo, Jorn Kohlhammer, Roy A Ruddle, H Scharlach, Xiaoru Yuan, Melanie Tory, Daniel Keefe","doi":"10.1109/MCG.2025.3538257","DOIUrl":"https://doi.org/10.1109/MCG.2025.3538257","url":null,"abstract":"<p><p>Dashboards have arguably been the most used visualizations during the COVID-19 pandemic. They were used to communicate its evolution to national governments for disaster mitigation, to the public domain to inform about its status, and to epidemiologists to comprehend and predict the evolution of the disease. Each design had to be tailored for different tasks and to varying audiences-in many cases set up in a very short time due to the urgent need. In this article, we collect notable examples of dashboards and reflect on their use and design during the pandemic from a user-oriented perspective. We interview a group of researchers with varying visualization expertise who actively used dashboards during the pandemic as part of their daily workflow. We discuss our findings and compile a list of lessons learned to support future visualization researchers and dashboard designers.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"45 2","pages":"135-142"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144287147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LossLens: Diagnostics for Machine Learning Through Loss Landscape Visual Analytics.","authors":"Tiankai Xie, Jiaqing Chen, Yaoqing Yang, Caleb Geniesse, Ge Shi, Ajinkya Jeevan Chaudhari, John Kevin Cava, Michael W Mahoney, Talita Perciano, Gunther H Weber, Ross Maciejewski","doi":"10.1109/MCG.2024.3509374","DOIUrl":"10.1109/MCG.2024.3509374","url":null,"abstract":"<p><p>Modern machine learning often relies on optimizing a neural network's parameters using a loss function to learn complex features. Beyond training, examining the loss function with respect to a network's parameters (i.e., as a loss landscape) can reveal insights into the architecture and learning process. While the local structure of the loss landscape surrounding an individual solution can be characterized using a variety of approaches, the global structure of a loss landscape, which includes potentially many local minima corresponding to different solutions, remains far more difficult to conceptualize and visualize. To address this difficulty, we introduce LossLens, a visual analytics framework that explores loss landscapes at multiple scales. LossLens integrates metrics from global and local scales into a comprehensive visual representation, enhancing model diagnostics. We demonstrate LossLens through two case studies: visualizing how residual connections influence a ResNet-20, and visualizing how physical parameters influence a physics-informed neural network solving a simple convection problem.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"112-125"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HoloJig: Interactive Spoken Prompt Specified Generative AI Environments.","authors":"Llogari Casas, Samantha Hannah, Kenny Mitchell","doi":"10.1109/MCG.2025.3553780","DOIUrl":"10.1109/MCG.2025.3553780","url":null,"abstract":"<p><p>HoloJig offers an interactive speech-to-virtual-reality (VR) experience that generates diverse environments in real time based on live spoken descriptions. Unlike traditional VR systems that rely on prebuilt assets, HoloJig dynamically creates personalized and immersive virtual spaces with depth-based parallax 3-D rendering, allowing users to define the characteristics of their immersive environment through verbal prompts. This generative approach opens up new possibilities for interactive experiences, including simulations, training, collaborative workspaces, and entertainment. In addition to speech-to-VR environment generation, a key innovation of HoloJig is its progressive visual transition mechanism, which smoothly dissolves between previously generated and newly requested environments, mitigating the delay caused by neural computations. This feature ensures a seamless and continuous user experience, even as new scenes are being rendered on remote servers.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"69-77"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Should I Render or Should AI Generate? Crafting Synthetic Semantic Segmentation Datasets With Controlled Generation.","authors":"Omar A Mures, Manuel Silva, Manuel Lijo-Sanchez, Emilio J Padron, Jose A Iglesias-Guitian","doi":"10.1109/MCG.2025.3553494","DOIUrl":"10.1109/MCG.2025.3553494","url":null,"abstract":"<p><p>This work explores the integration of generative AI models for automatically generating synthetic image-labeled data. Our approach leverages controllable diffusion models to generate synthetic variations of semantically labeled images. Synthetic datasets for semantic segmentation struggle to represent real-world subtleties, such as different weather conditions or fine details, typically relying on costly simulations and rendering. However, diffusion models can generate diverse images using input text prompts and guidance images, such as semantic masks. Our work introduces and tests a novel methodology for generating labeled synthetic images, with an initial focus on semantic segmentation, a demanding computer vision task. We showcase our approach in two distinct image segmentation domains, outperforming traditional computer graphics simulations in efficiently creating diverse datasets and training downstream models. We leverage generative models for crafting synthetically labeled images, posing the question: \"Should I render or should AI generate?\" Our results endorse a paradigm shift toward controlled generation models.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"57-68"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Visual Comparison Framework for Human and AI Paintings Using Neural Embeddings and Computational Aesthetics.","authors":"Yilin Ye, Rong Huang, Kang Zhang, Wei Zeng","doi":"10.1109/MCG.2025.3555122","DOIUrl":"10.1109/MCG.2025.3555122","url":null,"abstract":"<p><p>To facilitate comparative analysis of artificial intelligence (AI) and human paintings, we present a unified computational framework combining neural embedding and computational aesthetic features. We first exploit CLIP embedding to provide a projected overview for human and AI painting datasets, and we next leverage computational aesthetic metrics to obtain explainable features of paintings. On that basis, we design a visual analytics system that involves distribution discrepancy measurement for quantifying dataset differences and evolutionary analysis for comparing artists with AI models. Case studies comparing three AI-generated datasets with three human paintings datasets, and analyzing the evolutionary differences between authentic Picasso paintings and AI-generated ones, show the effectiveness of our framework.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"19-30"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}