IEEE Computer Graphics and Applications: Latest Publications

The UDV Card Deck: A Collaborative Design Framework to Facilitate Urban Visualization Conversations.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-31 DOI: 10.1109/MCG.2025.3556573
Damla Cay, Till Nagel, Sebastian Meier
{"title":"The UDV Card Deck: A Collaborative Design Framework to Facilitate Urban Visualization Conversations.","authors":"Damla Cay, Till Nagel, Sebastian Meier","doi":"10.1109/MCG.2025.3556573","DOIUrl":"10.1109/MCG.2025.3556573","url":null,"abstract":"<p><p>This paper presents the Urban Data Visualization (UDV) card deck, a tool designed to facilitate reflective discussions and inform the collaborative design process of urban data visualizations. The UDV card deck was developed to bridge the gap between theoretical knowledge and practice in workshop settings, fostering inclusive and reflective approaches to visualization design. Drawing from urban visualization design literature and the results from a series of expert workshops, these cards summarize key considerations when designing urban data visualizations. The card deck guides different activities in an engaging, collaborative, and structured format, promoting inclusion of diverse urban actors. We introduce the card deck and its goals, demonstrate its use in four case studies, and discuss our findings. Feedback from workshop participants indicates that the UDV card deck can serve as a supportive and reflective tool for urban data visualization researchers, designers and practitioners.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What Data Does and Does Not Represent: Visualizing the Archive of Slavery.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-20 DOI: 10.1109/MCG.2025.3553412
Shiyao Li, Margy Adams, Tanvi Sharma, Jay Varner, Lauren Klein
{"title":"What Data Does and Does Not Represent: Visualizing the Archive of Slavery.","authors":"Shiyao Li, Margy Adams, Tanvi Sharma, Jay Varner, Lauren Klein","doi":"10.1109/MCG.2025.3553412","DOIUrl":"10.1109/MCG.2025.3553412","url":null,"abstract":"<p><p>This paper presents a design report on a humanistically-informed data visualization of a dataset related to the trans-Atlantic slave trade. The visualization employs a quantitative dataset of slaving voyages that took place between 1565 and 1858 and uses historical scholarship and humanistic theory in order to call attention to the people behind the data, as well as to what the data does not or cannot represent. In the paper, we summarize the intersecting histories of slavery and data and then outline the theories that inform our design: of the archive of slavery, of the dangers of restaging historical violence, and of visibility, opacity, representation, and resistance. We then describe our design approach and discuss the visualization's ability to honor the lives of the enslaved by calling attention to their acts of resistance, both recorded and unrecorded.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
2024 IEEE Scientific Visualization Contest Winner: PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-11 DOI: 10.1109/MCG.2025.3550365
Yiming Shao, Chengming Liu, Zhiyuan Meng, Shufan Qian, Peng Jiang, Yunhai Wang, Qiong Zeng
{"title":"2024 IEEE Scientific Visualization Contest Winner: PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images.","authors":"Yiming Shao, Chengming Liu, Zhiyuan Meng, Shufan Qian, Peng Jiang, Yunhai Wang, Qiong Zeng","doi":"10.1109/MCG.2025.3550365","DOIUrl":"10.1109/MCG.2025.3550365","url":null,"abstract":"<p><p>Plume visualization is essential for unveiling the dynamics of hydrothermal systems. This paper introduces an interactive exploration tool, PlumeViz, designed to facilitate the extraction and visualization of multifaceted plume characteristics from data collected by a sonar device. The tool addresses the challenges posed by undersampled volume data and intricate plume structures by providing an interactive platform for plume identification, visual representation, and analysis. Key functionalities of PlumeViz encompass comprehensive plume evolution, plume feature extraction, and in-depth exploration of specific regions of interest. We demonstrate the efficacy of PlumeViz in visualizing hydrothermal plumes through a case study and a range of illustrative results.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Towards Collective Storytelling: Investigating Audience Annotations in Data Visualizations.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-04 DOI: 10.1109/MCG.2025.3547944
Tobias Kauer, Marian Dork, Benjamin Bach
{"title":"Towards Collective Storytelling: Investigating Audience Annotations in Data Visualizations.","authors":"Tobias Kauer, Marian Dork, Benjamin Bach","doi":"10.1109/MCG.2025.3547944","DOIUrl":"10.1109/MCG.2025.3547944","url":null,"abstract":"<p><p>This work investigates personal perspectives in visualization annotations as devices for collective data-driven storytelling. Inspired by existing efforts in critical cartography, we show how people share personal memories in a visualization of COVID-19 data and how comments by other visualization readers influence the reading and understanding of visualizations. Analyzing interaction logs, reader surveys, visualization annotations, and interviews, we find that reader annotations help other viewers relate to other people's stories and reflect on their own experiences. Further, we found that annotations embedded directly into the visualization can serve as social traces guiding through a visualization and help readers contextualize their own stories. With that, they supersede the attention paid to data encodings and become the main focal point of the visualization.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Voting-Based Intervention Planning Using AI-Generated Images.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-01 DOI: 10.1109/MCG.2025.3553620
Ioannis Kavouras, Ioannis Rallis, Emmanuel Sardis, Anastasios Doulamis, Nikolaos Doulamis
{"title":"Voting-Based Intervention Planning Using AI-Generated Images.","authors":"Ioannis Kavouras, Ioannis Rallis, Emmanuel Sardis, Anastasios Doulamis, Nikolaos Doulamis","doi":"10.1109/MCG.2025.3553620","DOIUrl":"10.1109/MCG.2025.3553620","url":null,"abstract":"<p><p>The continuous evolution of artificial intelligence and advanced algorithms capable of generating information from simplified input creates new opportunities for several scientific fields. Currently, the applicability of such technologies is limited to art and medical domains, but it can be applied to engineering domains to help the architects and urban planners design environmentally friendly solutions by proposing several alternatives in a short time. This work utilizes the image-inpainting algorithm for suggesting several alternative solutions to four European cities. In addition, this work suggests the utilization of a voting-based framework for finding the most preferred solution for each case study. The voting-based framework involves the participation of citizens and, as a result, decentralizes and democratizes the urban planning process. Finally, this research indicates the importance of deploying generative models in engineering applications by proving that generative AI models are capable of supporting the architects and urban planners in urban planning procedures.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"31-46"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reflections on the Use of Dashboards in the COVID-19 Pandemic.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-01 DOI: 10.1109/MCG.2025.3538257
Alessio Arleo, Rita Borgo, Jorn Kohlhammer, Roy A Ruddle, H Scharlach, Xiaoru Yuan, Melanie Tory, Daniel Keefe
{"title":"Reflections on the Use of Dashboards in the COVID-19 Pandemic.","authors":"Alessio Arleo, Rita Borgo, Jorn Kohlhammer, Roy A Ruddle, H Scharlach, Xiaoru Yuan, Melanie Tory, Daniel Keefe","doi":"10.1109/MCG.2025.3538257","DOIUrl":"https://doi.org/10.1109/MCG.2025.3538257","url":null,"abstract":"<p><p>Dashboards have arguably been the most used visualizations during the COVID-19 pandemic. They were used to communicate its evolution to national governments for disaster mitigation, to the public domain to inform about its status, and to epidemiologists to comprehend and predict the evolution of the disease. Each design had to be tailored for different tasks and to varying audiences-in many cases set up in a very short time due to the urgent need. In this article, we collect notable examples of dashboards and reflect on their use and design during the pandemic from a user-oriented perspective. We interview a group of researchers with varying visualization expertise who actively used dashboards during the pandemic as part of their daily workflow. We discuss our findings and compile a list of lessons learned to support future visualization researchers and dashboard designers.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"45 2","pages":"135-142"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144287147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LossLens: Diagnostics for Machine Learning Through Loss Landscape Visual Analytics.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-01 DOI: 10.1109/MCG.2024.3509374
Tiankai Xie, Jiaqing Chen, Yaoqing Yang, Caleb Geniesse, Ge Shi, Ajinkya Jeevan Chaudhari, John Kevin Cava, Michael W Mahoney, Talita Perciano, Gunther H Weber, Ross Maciejewski
{"title":"LossLens: Diagnostics for Machine Learning Through Loss Landscape Visual Analytics.","authors":"Tiankai Xie, Jiaqing Chen, Yaoqing Yang, Caleb Geniesse, Ge Shi, Ajinkya Jeevan Chaudhari, John Kevin Cava, Michael W Mahoney, Talita Perciano, Gunther H Weber, Ross Maciejewski","doi":"10.1109/MCG.2024.3509374","DOIUrl":"10.1109/MCG.2024.3509374","url":null,"abstract":"<p><p>Modern machine learning often relies on optimizing a neural network's parameters using a loss function to learn complex features. Beyond training, examining the loss function with respect to a network's parameters (i.e., as a loss landscape) can reveal insights into the architecture and learning process. While the local structure of the loss landscape surrounding an individual solution can be characterized using a variety of approaches, the global structure of a loss landscape, which includes potentially many local minima corresponding to different solutions, remains far more difficult to conceptualize and visualize. To address this difficulty, we introduce LossLens, a visual analytics framework that explores loss landscapes at multiple scales. LossLens integrates metrics from global and local scales into a comprehensive visual representation, enhancing model diagnostics. We demonstrate LossLens through two case studies: visualizing how residual connections influence a ResNet-20, and visualizing how physical parameters influence a physics-informed neural network solving a simple convection problem.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"112-125"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HoloJig: Interactive Spoken Prompt Specified Generative AI Environments.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-01 DOI: 10.1109/MCG.2025.3553780
Llogari Casas, Samantha Hannah, Kenny Mitchell
{"title":"HoloJig: Interactive Spoken Prompt Specified Generative AI Environments.","authors":"Llogari Casas, Samantha Hannah, Kenny Mitchell","doi":"10.1109/MCG.2025.3553780","DOIUrl":"10.1109/MCG.2025.3553780","url":null,"abstract":"<p><p>HoloJig offers an interactive, speech-to-virtual reality (VR), VR experience that generates diverse environments in real time based on live spoken descriptions. Unlike traditional VR systems that rely on prebuilt assets, HoloJig dynamically creates personalized and immersive virtual spaces with depth-based parallax 3-D rendering, allowing users to define the characteristics of their immersive environment through verbal prompts. This generative approach opens up new possibilities for interactive experiences, including simulations, training, collaborative workspaces, and entertainment. In addition to speech-to-VR environment generation, a key innovation of HoloJig is its progressive visual transition mechanism, which smoothly dissolves between previously generated and newly requested environments, mitigating the delay caused by neural computations. This feature ensures a seamless and continuous user experience, even as new scenes are being rendered on remote servers.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"69-77"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Should I Render or Should AI Generate? Crafting Synthetic Semantic Segmentation Datasets With Controlled Generation.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-01 DOI: 10.1109/MCG.2025.3553494
Omar A Mures, Manuel Silva, Manuel Lijo-Sanchez, Emilio J Padron, Jose A Iglesias-Guitian
{"title":"Should I Render or Should AI Generate? Crafting Synthetic Semantic Segmentation Datasets With Controlled Generation.","authors":"Omar A Mures, Manuel Silva, Manuel Lijo-Sanchez, Emilio J Padron, Jose A Iglesias-Guitian","doi":"10.1109/MCG.2025.3553494","DOIUrl":"10.1109/MCG.2025.3553494","url":null,"abstract":"<p><p>This work explores the integration of generative AI models for automatically generating synthetic image-labeled data. Our approach leverages controllable diffusion models to generate synthetic variations of semantically labeled images. Synthetic datasets for semantic segmentation struggle to represent real-world subtleties, such as different weather conditions or fine details, typically relying on costly simulations and rendering. However, diffusion models can generate diverse images using input text prompts and guidance images, such as semantic masks. Our work introduces and tests a novel methodology for generating labeled synthetic images, with an initial focus on semantic segmentation, a demanding computer vision task. We showcase our approach in two distinct image segmentation domains, outperforming traditional computer graphics simulations in efficiently creating diverse datasets and training downstream models. We leverage generative models for crafting synthetically labeled images, posing the question: \"Should I render or should AI generate?\" Our results endorse a paradigm shift toward controlled generation models.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"57-68"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified Visual Comparison Framework for Human and AI Paintings Using Neural Embeddings and Computational Aesthetics.
IF 1.7, CAS Quartile 4, Computer Science
IEEE Computer Graphics and Applications Pub Date: 2025-03-01 DOI: 10.1109/MCG.2025.3555122
Yilin Ye, Rong Huang, Kang Zhang, Wei Zeng
{"title":"Unified Visual Comparison Framework for Human and AI Paintings Using Neural Embeddings and Computational Aesthetics.","authors":"Yilin Ye, Rong Huang, Kang Zhang, Wei Zeng","doi":"10.1109/MCG.2025.3555122","DOIUrl":"10.1109/MCG.2025.3555122","url":null,"abstract":"<p><p>To facilitate comparative analysis of artificial intelligence (AI) and human paintings, we present a unified computational framework combining neural embedding and computational aesthetic features. We first exploit CLIP embedding to provide a projected overview for human and AI painting datasets, and we next leverage computational aesthetic metrics to obtain explainable features of paintings. On that basis, we design a visual analytics system that involves distribution discrepancy measurement for quantifying dataset differences and evolutionary analysis for comparing artists with AI models. Case studies comparing three AI-generated datasets with three human paintings datasets, and analyzing the evolutionary differences between authentic Picasso paintings and AI-generated ones, show the effectiveness of our framework.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"19-30"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0