Latest Articles — IEEE Computer Graphics and Applications

Critical Interactivity: Exploration and Narration in Data Visualization
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-04-08 | DOI: 10.1109/MCG.2025.3544684
Authors: Francesca Morini, Manuela Garreton, Jona Pomerance, Nadia Zeissig, Sabine de Guenther, Fidel Thomet, Linda Freyberg, Ilias Kyriazis, Andrea Scholz, Marian Dork

Abstract: We propose critical interactivity as a concept to study and design the dynamic and transitory aspects of data visualizations. Theoretically, interactivity is often described as the means to support analytical tasks, while in practice, it encompasses the techniques that alter visual representations. These notions are a useful starting point to study the role of interactivity in critical engagements with data visualizations. At the core of critical interactivity is the negotiation of authority and agency: authority as authors provide structure and context, and agency as viewers navigate and interpret the data on their own terms. This raises the critical question: who has the power to control the visualization? Drawing from four case studies in science communication, art history, anthropology, and climate advocacy, we examine how critical interactivity links exploration and narration. We reflect on the effort involved in preparing data, and propose design strategies for implementing critical interactivity in data visualization.

Citations: 0
MarsIPAN: Optimization and Negotiations in Mars Sample Return Scheduling Coordination
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-04-07 | DOI: 10.1109/MCG.2025.3558426
Authors: Jasmine T Otto, Malika Khurana, Noah Deutsch, Benjamin P S Donitz, Oskar Elek, Scott Davidoff

Abstract: Resource allocation problems touch almost every aspect of modernity. We examine communication bandwidth optimization and negotiation in NASA's early stage Mars Sample Return (MSR) mission, which places multiple robots into a single region on Mars. We present a design study conducted over two years at the NASA Jet Propulsion Laboratory with MSR, which characterizes the design and evaluation of the deployed MarsIPAN schedule browser. We find that MarsIPAN changes the schedule analysis process, providing new insight into how bandwidth is allocated, and enabling individual spacecraft teams to openly negotiate for scarce resources. This visibility leads to changes in how spacecraft teams apportion bandwidth, plan staffing, and organize and share resources. We reflect on the design study methodology as revealing, documenting, and supporting existing communication processes and software infrastructure within knowledge-intensive organizations.

Citations: 0
Computational Design and Fabrication of Protective Foam
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-04-01 | DOI: 10.1109/MCG.2025.3556656
Authors: Tsukasa Fukusato, Naoki Kita

Abstract: This paper proposes a method to design protective foam for packaging 3D objects. Users first load a 3D object and define a block-based design space by setting the block resolution and the size of each block. The system then constructs a block map in the space using depth textures of the input object, separates the map into two regions, and outputs the regions as foams. The proposed method is fast and stable, allowing the user to interactively make protective foams. The generated foam is a height field in each direction, so the foams can easily be fabricated using various materials, such as LEGO blocks, sponge with slits, glass, and wood. This paper shows some examples of fabrication results to demonstrate the robustness of our system. In addition, we conducted a user study and confirmed that our system is effective for manually designing protective foams envisioned by users.

Citations: 0
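The height-field property described in the abstract above is what makes the foams fabricable from stacked materials. A minimal NumPy sketch of the idea, under stated assumptions: we take a top-view depth map of the object as input and quantize the remaining space to whole blocks. The function name, block quantization rule, and toy data are illustrative, not the paper's actual block-map construction or two-region separation.

```python
import numpy as np

def foam_height_field(depth: np.ndarray, block_size: float, max_height: float) -> np.ndarray:
    """Illustrative sketch: derive a quantized foam height field from a
    top-view depth map (depth = height of the object surface in each cell,
    0 where the object does not cover the cell).

    Foam fills the space from the object surface up to the box lid,
    quantized down to whole blocks so it can be built from LEGO-like units.
    """
    raw = max_height - depth            # free space above the object per cell
    blocks = np.floor(raw / block_size) # snap down to an integer block count
    return blocks * block_size

# Toy 3x3 "object": a bump of height 3.2 in the middle, box lid at height 4.
depth = np.array([[0.0, 0.0, 0.0],
                  [0.0, 3.2, 0.0],
                  [0.0, 0.0, 0.0]])
foam = foam_height_field(depth, block_size=1.0, max_height=4.0)
print(foam)  # center cell gets 0 blocks of foam, uncovered cells get 4
```

Because the result is monotone along the stacking direction, each column is a single tower of blocks, which is why such foams are straightforward to cut or assemble.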
Unified Visual Comparison Framework for Human and AI Paintings using Neural Embeddings and Computational Aesthetics
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-04-01 | DOI: 10.1109/MCG.2025.3555122
Authors: Yilin Ye, Rong Huang, Kang Zhang, Wei Zeng

Abstract: To facilitate comparative analysis of AI and human paintings, we present a unified computational framework combining neural embedding and computational aesthetic features. We first exploit CLIP embedding to provide a projected overview for human and AI painting datasets, and we next leverage computational aesthetic metrics to obtain explainable features of paintings. On this basis, we design a visual analytics system that involves distribution discrepancy measurement for quantifying dataset differences and evolutionary analysis for comparing artists with AI models. Case studies comparing three AI-generated datasets with three human painting datasets, and analyzing the evolutionary differences between authentic Picasso paintings and AI-generated ones, show the effectiveness of our framework.

Citations: 0
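The "distribution discrepancy measurement" mentioned above can be illustrated with a kernel two-sample statistic such as maximum mean discrepancy (MMD). This is a generic sketch with random vectors standing in for CLIP embeddings; the paper's actual discrepancy measure is not specified here, so the metric choice, bandwidth, and data are assumptions.

```python
import numpy as np

def mmd_rbf(X: np.ndarray, Y: np.ndarray, sigma: float = 2.0) -> float:
    """Biased estimate of squared maximum mean discrepancy between two
    samples using an RBF kernel. Zero when X and Y are identical; grows
    as the two embedding distributions drift apart."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, size=(200, 8))  # stand-in for human-painting embeddings
ai    = rng.normal(0.5, 1.0, size=(200, 8))  # stand-in for a shifted AI distribution
same  = rng.normal(0.0, 1.0, size=(200, 8))  # fresh sample from the human distribution

print(mmd_rbf(human, ai))    # clearly positive: distributions differ
print(mmd_rbf(human, same))  # near zero: same underlying distribution
```

In a real pipeline the rows would be L2-normalized CLIP image embeddings of each painting, and the discrepancy would be reported per dataset pair alongside the projected overview.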
The UDV Card Deck: A Collaborative Design Framework to Facilitate Urban Visualization Conversations
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-03-31 | DOI: 10.1109/MCG.2025.3556573
Authors: Damla Cay, Till Nagel, Sebastian Meier

Abstract: This paper presents the Urban Data Visualization (UDV) card deck, a tool designed to facilitate reflective discussions and inform the collaborative design process of urban data visualizations. The UDV card deck was developed to bridge the gap between theoretical knowledge and practice in workshop settings, fostering inclusive and reflective approaches to visualization design. Drawing from urban visualization design literature and the results from a series of expert workshops, these cards summarize key considerations when designing urban data visualizations. The card deck guides different activities in an engaging, collaborative, and structured format, promoting inclusion of diverse urban actors. We introduce the card deck and its goals, demonstrate its use in four case studies, and discuss our findings. Feedback from workshop participants indicates that the UDV card deck can serve as a supportive and reflective tool for urban data visualization researchers, designers, and practitioners.

Citations: 0
Meet-In-Style: Text-driven Real-time Video Stylization using Diffusion Models
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-03-24 | DOI: 10.1109/MCG.2025.3554312
Authors: David Kunz, Ondrej Texler, David Mould, Daniel Sykora

Abstract: We present Meet-In-Style, a new approach to real-time stylization of live video streams using text prompts. In contrast to previous text-based techniques, our system is able to stylize input video at 30 fps on commodity graphics hardware while preserving structural consistency of the stylized sequence and minimizing temporal flicker. A key idea of our approach is to combine diffusion-based image stylization with a few-shot patch-based training strategy that can produce a custom image-to-image stylization network with real-time inference capabilities. Such a combination not only allows for fast stylization, but also greatly improves consistency of individual stylized frames compared to a scenario where diffusion is applied to each video frame separately. We conducted a number of user experiments in which we found our approach to be particularly useful in video conference scenarios, enabling participants to interactively apply different visual styles to themselves (or to each other) to enhance the overall chatting experience.

Citations: 0
Voting-Based Intervention Planning Using AI-Generated Images
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-03-21 | DOI: 10.1109/MCG.2025.3553620
Authors: Ioannis Kavouras, Ioannis Rallis, Emmanuel Sardis, Anastasios Doulamis, Nikolaos Doulamis

Abstract: The continuous evolution of artificial intelligence and advanced algorithms capable of generating information from simplified input creates new opportunities for several scientific fields. Currently, the applicability of such technologies is largely limited to art and medical domains, but it can be applied to engineering domains to help architects and urban planners design environmentally friendly solutions by proposing several alternatives in a short time. This work utilizes an image-inpainting algorithm to suggest several alternative solutions for four European cities. In addition, this work suggests the use of a voting-based framework for finding the most preferred solution for each case study. The voting-based framework involves the participation of citizens and as a result decentralizes and democratizes the urban planning process. Finally, this research indicates the importance of deploying generative models in engineering applications, by showing that generative AI models are capable of supporting architects and urban planners in urban planning procedures.

Citations: 0
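The voting-based selection step above reduces to tallying citizen ballots over the generated alternatives. A minimal sketch, assuming simple plurality voting (the paper's actual voting rule is not specified here, and the alternative names are invented for illustration):

```python
from collections import Counter

def most_preferred(votes):
    """Plurality tally of citizen votes over AI-generated design
    alternatives. Ties go to the alternative seen first in the ballot
    stream (Counter preserves insertion order and most_common is stable)."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical ballots for one case-study site.
ballots = ["green-roof", "pocket-park", "green-roof", "bike-lane", "green-roof"]
winner = most_preferred(ballots)
print(winner)  # -> green-roof
```

In practice each "vote" would be a citizen's choice among the inpainted intervention images for a site, and the winning alternative would be forwarded to the planners.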
HoloJig: Interactive Spoken Prompt Specified Generative AI Environments
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-03-21 | DOI: 10.1109/MCG.2025.3553780
Authors: Llogari Casas, Samantha Hannah, Kenny Mitchell

Abstract: HoloJig offers an interactive speech-to-virtual-reality (VR) experience that generates diverse environments in real time based on live spoken descriptions. Unlike traditional VR systems that rely on prebuilt assets, HoloJig dynamically creates personalized and immersive virtual spaces with depth-based parallax 3D rendering, allowing users to define the characteristics of their immersive environment through verbal prompts. This generative approach opens up new possibilities for interactive experiences, including simulations, training, collaborative workspaces, and entertainment. In addition to speech-to-VR environment generation, a key innovation of HoloJig is its progressive visual transition mechanism, which smoothly dissolves between previously generated and newly requested environments, mitigating the delay caused by neural computations. This feature ensures a seamless and continuous user experience, even as new scenes are being rendered on remote servers.

Citations: 0
Should I render or should AI Generate? Crafting Synthetic Semantic Segmentation Datasets with Controlled Generation
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-03-21 | DOI: 10.1109/MCG.2025.3553494
Authors: Omar A Mures, Manuel Silva, Manuel Lijo-Sanchez, Emilio J Padron, Jose A Iglesias-Guitian

Abstract: This work explores the integration of generative AI models for automatically generating synthetic image-labeled data. Our approach leverages controllable Diffusion Models to generate synthetic variations of semantically labeled images. Synthetic datasets for semantic segmentation struggle to represent real-world subtleties, such as different weather conditions or fine details, typically relying on costly simulations and rendering. However, Diffusion Models can generate diverse images using input text prompts and guidance images, like semantic masks. Our work introduces and tests a novel methodology for generating labeled synthetic images, with an initial focus on semantic segmentation, a demanding computer vision task. We showcase our approach in two distinct image segmentation domains, outperforming traditional computer graphics simulations in efficiently creating diverse datasets and training downstream models. We leverage generative models for crafting synthetically labeled images, posing the question: "Should I render or should AI generate?". Our results endorse a paradigm shift towards controlled generation models.

Citations: 0
What Data Does and Does Not Represent: Visualizing the Archive of Slavery
IF 1.7 | CAS Zone 4 | Computer Science
IEEE Computer Graphics and Applications | Pub Date: 2025-03-20 | DOI: 10.1109/MCG.2025.3553412
Authors: Shiyao Li, Margy Adams, Tanvi Sharma, Jay Varner, Lauren Klein

Abstract: This paper presents a design report on a humanistically-informed data visualization of a dataset related to the trans-Atlantic slave trade. The visualization employs a quantitative dataset of slaving voyages that took place between 1565 and 1858 and uses historical scholarship and humanistic theory in order to call attention to the people behind the data, as well as to what the data does not or cannot represent. In the paper, we summarize the intersecting histories of slavery and data and then outline the theories that inform our design: of the archive of slavery, of the dangers of restaging historical violence, and of visibility, opacity, representation, and resistance. We then describe our design approach and discuss the visualization's ability to honor the lives of the enslaved by calling attention to their acts of resistance, both recorded and unrecorded.

Citations: 0