{"title":"2024 IEEE Scientific Visualization Contest Winner: PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images.","authors":"Yiming Shao, Chengming Liu, Zhiyuan Meng, Shufan Qian, Peng Jiang, Yunhai Wang, Qiong Zeng","doi":"10.1109/MCG.2025.3550365","DOIUrl":"10.1109/MCG.2025.3550365","url":null,"abstract":"<p><p>Plume visualization is essential for unveiling the dynamics of hydrothermal systems. This paper introduces an interactive exploration tool, PlumeViz, designed to facilitate the extraction and visualization of multifaceted plume characteristics from data collected by a sonar device. The tool addresses the challenges posed by undersampled volume data and intricate plume structures by providing an interactive platform for plume identification, visual representation, and analysis. Key functionalities of PlumeViz encompass comprehensive plume evolution, plume feature extraction, and in-depth exploration of specific regions of interest. We demonstrate the efficacy of PlumeViz in visualizing hydrothermal plumes through a case study and a range of illustrative results.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Implementation of the Transparent, Interpretable, and Multimodal (TIM) AR Personal Assistant.","authors":"Erin McGowan, Joao Rulff, Sonia Castelo, Guande Wu, Shaoyu Chen, Roque Lopez, Bea Steers, Iran R Roman, Fabio F Dias, Jing Qian, Parikshit Solunke, Michael Middleton, Ryan McKendrick, Claudio T Silva","doi":"10.1109/MCG.2025.3549696","DOIUrl":"https://doi.org/10.1109/MCG.2025.3549696","url":null,"abstract":"<p><p>The concept of an AI assistant for task guidance is rapidly shifting from a science fiction staple to an impending reality. Such a system is inherently complex, requiring models for perceptual grounding, attention, and reasoning, an intuitive interface that adapts to the performer's needs, and the orchestration of data streams from many sensors. Moreover, all data acquired by the system must be readily available for post-hoc analysis to enable developers to understand performer behavior and quickly detect failures. We introduce TIM, the first end-to-end AI-enabled task guidance system in augmented reality which is capable of detecting both the user and scene as well as providing adaptable, just-in-time feedback. We discuss the system challenges and propose design solutions. We also demonstrate how TIM adapts to domain applications with varying needs, highlighting how the system components can be customized for each scenario.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Next Generation XR Systems-Large Language Models Meet Augmented and Virtual Reality.","authors":"Muhamamd Zeshan Afzal, Sk Aziz Ali, Didier Stricker, Peter Eisert, Anna Hilsmann, Daniel Perez-Marcos, Marco Bianchi, Sonia Crottaz-Herbette, Roberto De Ioris, Eleni Mangina, Mirco Sanguineti, Ander Salaberria, Oier Lopez de Lacalle, Aitor Garcia-Pablos, Montse Cuadros","doi":"10.1109/MCG.2025.3548554","DOIUrl":"https://doi.org/10.1109/MCG.2025.3548554","url":null,"abstract":"<p><p>Extended Reality (XR) is evolving rapidly, offering new paradigms for human-computer interaction. This position paper argues that integrating Large Language Models (LLMs) with XR systems represents a fundamental shift toward more intelligent, context-aware, and adaptive mixed-reality experiences. We propose a structured framework built on three key pillars: (1) Perception and Situational Awareness, (2) Knowledge Modeling and Reasoning, and (3) Visualization and Interaction. We believe leveraging LLMs within XR environments enables enhanced situational awareness, real-time knowledge retrieval, and dynamic user interaction, surpassing traditional XR capabilities. We highlight the potential of this integration in neurorehabilitation, safety training, and architectural design while underscoring ethical considerations such as privacy, transparency, and inclusivity. This vision aims to spark discussion and drive research toward more intelligent, human-centric XR systems.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143574654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Collective Storytelling: Investigating Audience Annotations in Data Visualizations.","authors":"Tobias Kauer, Marian Dork, Benjamin Bach","doi":"10.1109/MCG.2025.3547944","DOIUrl":"10.1109/MCG.2025.3547944","url":null,"abstract":"<p><p>This work investigates personal perspectives in visualization annotations as devices for collective data-driven storytelling. Inspired by existing efforts in critical cartography, we show how people share personal memories in a visualization of COVID-19 data and how comments by other visualization readers influence the reading and understanding of visualizations. Analyzing interaction logs, reader surveys, visualization annotations, and interviews, we find that reader annotations help other viewers relate to other people's stories and reflect on their own experiences. Further, we find that annotations embedded directly into the visualization can serve as social traces guiding readers through a visualization and help them contextualize their own stories. With that, they supersede the attention paid to data encodings and become the main focal point of the visualization.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-Inflected Visions of Feminicide.","authors":"Helena Suarez Val","doi":"10.1109/MCG.2025.3543025","DOIUrl":"10.1109/MCG.2025.3543025","url":null,"abstract":"<p><p>This paper advances the notion of \"data-inflected visions\" to show how various visual representations may come to be imagined as data, and how doing so opens up different meanings for the political and affective work of data. The visuality of social issues is produced through competing hegemonic and alternative visions, and conventional visualization is not the only format in which data participate in visual contestation. Focusing on Latin American actions to visibilizar feminicide, I propose an encounter with activist-made imagery to elucidate how data participate in alternative representations of the issue. The article contributes both an exploration of the role of data in constructing how feminicide is seen and a novel approach to study data and visuality, to inspire scholars from visual studies and from feminist and critical data and data visualization studies to engage with images beyond conventional graphic representation as sites for the political affective work of data.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerate Cutting Tasks in Real-Time Interactive Cutting Simulation of Deformable Objects.","authors":"Shiyu Jia, Qian Dong, Zhenkuan Pan, Xiaokang Yu, Wenli Xiu, Jingli Zhang","doi":"10.1109/MCG.2025.3538985","DOIUrl":"10.1109/MCG.2025.3538985","url":null,"abstract":"<p><p>Simulation speed is crucial for virtual reality simulators involving real-time interactive cutting of deformable objects, such as surgical simulators. Previous efforts to accelerate these simulations resulted in significant speed increases during non-cutting periods, but only moderate ones during cutting periods. This paper aims to further increase the latter. Three novel methods are proposed: (1) GPU-based update of mass and stiffness matrices of composite finite elements. (2) GPU-based collision processing between cutting tools and deformable objects. (3) Redesigned CPU-GPU synchronization mechanisms combined with GPU acceleration for the update of the surface mesh. Simulation tests, including a complex hepatectomy simulation, are performed. Results show that our methods increase the simulation speed during cutting periods by 40.4% to 56.5%.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Reality Art Museum Mobile Guide for Enhancing User Experience.","authors":"Tung-Ju Hsieh, Yao-Hua Su, Li-Sen Lin","doi":"10.1109/MCG.2025.3529981","DOIUrl":"https://doi.org/10.1109/MCG.2025.3529981","url":null,"abstract":"<p><p>The advancement of augmented reality technology provides the means for next-generation art museum tour guides. In this study, we develop an AR navigation and multimedia guide for mobile devices and head-mounted displays. Visitors follow virtual routes to the exhibits and switch to multimedia commentary mode, pointing to exhibits with their phone camera for commentary information. Two case studies are conducted to evaluate the proposed system. The results show that the proposed system outperforms the guided tour and brochure tour in enhancing the user experience and attracting visitors to explore the exhibitions. This research contributes to the field of AR technology and cultural heritage education by offering a mobile application for indoor navigation and artwork exploration, allowing art museum visitors to actively engage with the exhibits and enhance their understanding while maintaining their interest.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Semi-Automated Pipeline for the Creation of Virtual Fitting Room Experiences Featuring Motion Capture and Cloth Simulation.","authors":"Alberto Cannavo, Giacomo Offre, Fabrizio Lamberti","doi":"10.1109/MCG.2024.3521716","DOIUrl":"https://doi.org/10.1109/MCG.2024.3521716","url":null,"abstract":"<p><p>Technological advancements are prompting the digitization of many industries, including fashion. Many brands are exploring ways to enhance customers' experience, e.g., offering new shopping-oriented services like Virtual Fitting Rooms (VFRs). However, there are still challenges that prevent customers from effectively using these tools for trying on digital garments. Challenges are associated with difficulties in obtaining high-fidelity reconstructions of body shapes and providing realistic visualizations of animated clothes following real-time customers' movements. This paper addresses these shortcomings by proposing a semi-automated pipeline supporting the creation of VFR experiences by exploiting state-of-the-art techniques for the accurate description and reconstruction of customers' 3D avatars, motion capture-based animation, as well as realistic garment design and simulation. A user study in which the resulting VFR experience was compared with those created with two existing tools showed the benefits of the devised solution in terms of usability, embodiment, model accuracy, perceived value, adoption, and purchase intention.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LossLens: Diagnostics for Machine Learning Through Loss Landscape Visual Analytics.","authors":"Tiankai Xie, Jiaqing Chen, Yaoqing Yang, Caleb Geniesse, Ge Shi, Ajinkya Chaudhari, John Kevin Cava, Michael W Mahoney, Talita Perciano, Gunther H Weber, Ross Maciejewski","doi":"10.1109/MCG.2024.3509374","DOIUrl":"https://doi.org/10.1109/MCG.2024.3509374","url":null,"abstract":"<p><p>Modern machine learning often relies on optimizing a neural network's parameters using a loss function to learn complex features. Beyond training, examining the loss function with respect to a network's parameters (i.e., as a loss landscape) can reveal insights into the architecture and learning process. While the local structure of the loss landscape surrounding an individual solution can be characterized using a variety of approaches, the global structure of a loss landscape, which includes potentially many local minima corresponding to different solutions, remains far more difficult to conceptualize and visualize. To address this difficulty, we introduce LossLens, a visual analytics framework that explores loss landscapes at multiple scales. LossLens integrates metrics from global and local scales into a comprehensive visual representation, enhancing model diagnostics. We demonstrate LossLens through two case studies: visualizing how residual connections influence a ResNet-20, and visualizing how physical parameters influence a physics-informed neural network (PINN) solving a simple convection problem.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Virtual Reality Training through Artificial Intelligence: A Case Study.","authors":"Riccardo Giussani, Nicolo Dozio, Stefano Rigone, Luca Parenzan, Francesco Ferrise","doi":"10.1109/MCG.2024.3510857","DOIUrl":"https://doi.org/10.1109/MCG.2024.3510857","url":null,"abstract":"<p><p>Companies face the pressing challenge of bridging the skills gap caused by relentless technological progress and high worker turnover rates, necessitating continuous investment in up-skilling and re-skilling initiatives. Virtual reality is already used in training programs. However, despite its advantages, the time required to author and customize virtual environments often limits the technology's convenience. The paper proposes an architecture that aims to facilitate the integration of artificial intelligence assistance into virtual reality training environments to improve user engagement and reduce authoring effort. The proposed architecture was tested in a study that compared a virtual training session with and without a digital assistant powered by artificial intelligence. Results indicate comparable levels of usability and perceived workload between the two conditions, with higher performance satisfaction in the assisted condition.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}