Latest Articles from IEEE Computer Graphics and Applications

Next Generation XR Systems-Large Language Models Meet Augmented and Virtual Reality.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2025-03-06 · DOI: 10.1109/MCG.2025.3548554
Muhammad Zeshan Afzal, Sk Aziz Ali, Didier Stricker, Peter Eisert, Anna Hilsmann, Daniel Perez-Marcos, Marco Bianchi, Sonia Crottaz-Herbette, Roberto De Ioris, Eleni Mangina, Mirco Sanguineti, Ander Salaberria, Oier Lopez de Lacalle, Aitor Garcia-Pablos, Montse Cuadros
Abstract: Extended Reality (XR) is evolving rapidly, offering new paradigms for human-computer interaction. This position paper argues that integrating Large Language Models (LLMs) with XR systems represents a fundamental shift toward more intelligent, context-aware, and adaptive mixed-reality experiences. We propose a structured framework built on three key pillars: (1) Perception and Situational Awareness, (2) Knowledge Modeling and Reasoning, and (3) Visualization and Interaction. We believe leveraging LLMs within XR environments enables enhanced situational awareness, real-time knowledge retrieval, and dynamic user interaction, surpassing traditional XR capabilities. We highlight the potential of this integration in neurorehabilitation, safety training, and architectural design while underscoring ethical considerations such as privacy, transparency, and inclusivity. This vision aims to spark discussion and drive research toward more intelligent, human-centric XR systems.
Citations: 0
Towards Collective Storytelling: Investigating Audience Annotations in Data Visualizations.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2025-03-04 · DOI: 10.1109/MCG.2025.3547944
Tobias Kauer, Marian Dörk, Benjamin Bach
Abstract: This work investigates personal perspectives in visualization annotations as devices for collective data-driven storytelling. Inspired by existing efforts in critical cartography, we show how people share personal memories in a visualization of COVID-19 data and how comments by other visualization readers influence the reading and understanding of visualizations. Analyzing interaction logs, reader surveys, visualization annotations, and interviews, we find that reader annotations help other viewers relate to other people's stories and reflect on their own experiences. Further, we found that annotations embedded directly into the visualization can serve as social traces that guide readers through a visualization and help them contextualize their own stories. With that, they supersede the attention paid to data encodings and become the main focal point of the visualization.
Citations: 0
Data-Inflected Visions of Feminicide.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2025-02-17 · DOI: 10.1109/MCG.2025.3543025
Helena Suárez Val
Abstract: This paper advances the notion of "data-inflected visions" to show how various visual representations may come to be imagined as data, and how doing so opens up different meanings for the political and affective work of data. The visuality of social issues is produced through competing hegemonic and alternative visions, and conventional visualization is not the only format in which data participate in visual contestation. Focusing on Latin American actions to visibilizar feminicide, I propose an encounter with activist-made imagery to elucidate how data participate in alternative representations of the issue. The article contributes both an exploration of the role of data in constructing how feminicide is seen and a novel approach to studying data and visuality, aiming to inspire scholars from visual studies and from feminist and critical data and data visualization studies to engage with images beyond conventional graphic representation as sites for the political and affective work of data.
Citations: 0
Accelerate Cutting Tasks in Real-Time Interactive Cutting Simulation of Deformable Objects.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2025-02-05 · DOI: 10.1109/MCG.2025.3538985
Shiyu Jia, Qian Dong, Zhenkuan Pan, Xiaokang Yu, Wenli Xiu, Jingli Zhang
Abstract: Simulation speed is crucial for virtual reality simulators that involve real-time interactive cutting of deformable objects, such as surgical simulators. Previous acceleration efforts achieved significant speedups during non-cutting periods, but only moderate ones during cutting periods. This paper aims to further increase the latter. Three novel methods are proposed: (1) GPU-based update of the mass and stiffness matrices of composite finite elements; (2) GPU-based collision processing between cutting tools and deformable objects; (3) redesigned CPU-GPU synchronization mechanisms combined with GPU acceleration for updating the surface mesh. Simulation tests, including a complex hepatectomy simulation, show that our methods increase the simulation speed during cutting periods by 40.4% to 56.5%.
Citations: 0
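The abstract gives no implementation details, but the following minimal PyTorch sketch illustrates the general idea of batching per-element matrix updates on the GPU: it recomputes a lumped mass vector for every tetrahedral element in parallel after a topology change. The mesh connectivity, node positions, and density value are placeholder assumptions; the paper's actual method operates on composite finite elements with full mass and stiffness matrices rather than this simplified lumped form.

```python
# A minimal sketch (not the authors' implementation): after cutting changes the
# mesh topology, recompute a lumped mass vector for all tetrahedral elements in
# parallel on the GPU. Connectivity, positions, and density are placeholders.
import torch

def lumped_mass(nodes: torch.Tensor, tets: torch.Tensor, density: float) -> torch.Tensor:
    """nodes: (N, 3) vertex positions; tets: (M, 4) vertex indices per tetrahedron."""
    v = nodes[tets]                          # (M, 4, 3) corner positions per element
    e = v[:, 1:] - v[:, :1]                  # (M, 3, 3) edge vectors from corner 0
    vol = torch.abs(torch.det(e)) / 6.0      # (M,) tetrahedron volumes
    m_elem = density * vol / 4.0             # equal share of each element's mass per corner
    mass = torch.zeros(nodes.shape[0], device=nodes.device)
    mass.index_add_(0, tets.reshape(-1), m_elem.repeat_interleave(4))
    return mass                              # (N,) lumped mass per vertex

# Example usage (falls back to CPU if CUDA is unavailable):
device = "cuda" if torch.cuda.is_available() else "cpu"
nodes = torch.rand(1000, 3, device=device)
tets = torch.randint(0, 1000, (4000, 4), device=device)
print(lumped_mass(nodes, tets, density=1050.0).shape)
```

In the real system such updates would presumably target only the elements affected by a cut and would be interleaved with the redesigned CPU-GPU synchronization the authors describe.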
Augmented Reality Art Museum Mobile Guide for Enhancing User Experience.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2025-01-14 · DOI: 10.1109/MCG.2025.3529981
Tung-Ju Hsieh, Yao-Hua Su, Li-Sen Lin
Abstract: The advancement of augmented reality technology provides the means for next-generation art museum tour guides. In this study, we develop an AR navigation and multimedia guide for mobile devices and head-mounted displays. Visitors follow virtual routes to the exhibits and switch to a multimedia commentary mode, pointing their phone camera at an exhibit to receive commentary information. Two case studies are conducted to evaluate the proposed system. The results show that the proposed system outperforms guided tours and brochure tours in enhancing the user experience and attracting visitors to explore the exhibitions. This research contributes to AR technology and cultural heritage education by offering a mobile application for indoor navigation and artwork exploration, allowing art museum visitors to actively engage with the exhibits and deepen their understanding while maintaining their interest.
Citations: 0
A Semi-Automated Pipeline for the Creation of Virtual Fitting Room Experiences Featuring Motion Capture and Cloth Simulation.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2024-12-24 · DOI: 10.1109/MCG.2024.3521716
Alberto Cannavò, Giacomo Offre, Fabrizio Lamberti
Abstract: Technological advancements are prompting the digitization of many industries, including fashion. Many brands are exploring ways to enhance the customer experience, e.g., by offering new shopping-oriented services like Virtual Fitting Rooms (VFRs). However, challenges still prevent customers from effectively using these tools to try on digital garments, chiefly the difficulty of obtaining high-fidelity reconstructions of body shapes and of providing realistic visualizations of animated clothes that follow customers' movements in real time. This paper addresses these shortcomings by proposing a semi-automated pipeline that supports the creation of VFR experiences, exploiting state-of-the-art techniques for the accurate description and reconstruction of customers' 3D avatars, motion capture-based animation, and realistic garment design and simulation. A user study comparing the resulting VFR experience with those created using two existing tools showed the benefits of the devised solution in terms of usability, embodiment, model accuracy, perceived value, adoption, and purchase intention.
Citations: 0
LossLens: Diagnostics for Machine Learning Through Loss Landscape Visual Analytics.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2024-12-16 · DOI: 10.1109/MCG.2024.3509374
Tiankai Xie, Jiaqing Chen, Yaoqing Yang, Caleb Geniesse, Ge Shi, Ajinkya Chaudhari, John Kevin Cava, Michael W Mahoney, Talita Perciano, Gunther H Weber, Ross Maciejewski
Abstract: Modern machine learning often relies on optimizing a neural network's parameters using a loss function to learn complex features. Beyond training, examining the loss function with respect to a network's parameters (i.e., as a loss landscape) can reveal insights into the architecture and learning process. While the local structure of the loss landscape surrounding an individual solution can be characterized using a variety of approaches, the global structure of a loss landscape, which includes potentially many local minima corresponding to different solutions, remains far more difficult to conceptualize and visualize. To address this difficulty, we introduce LossLens, a visual analytics framework that explores loss landscapes at multiple scales. LossLens integrates metrics from global and local scales into a comprehensive visual representation, enhancing model diagnostics. We demonstrate LossLens through two case studies: visualizing how residual connections influence a ResNet-20, and visualizing how physical parameters influence a physics-informed neural network (PINN) solving a simple convection problem.
Citations: 0
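As background on the term "loss landscape," a local slice of it can be computed by evaluating the loss while perturbing the trained parameters along a few fixed directions. The sketch below is illustrative only and is not the LossLens system: the tiny linear model, random data, two unnormalized random probe directions, and the 21x21 grid are all placeholder assumptions.

```python
# A minimal sketch of a 2-D loss-landscape slice around trained parameters.
# Illustrative background, not LossLens: model, data, and probe directions
# are placeholder assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)                       # stand-in for a trained network
x, y = torch.randn(256, 10), torch.randn(256, 1)
loss_fn = nn.MSELoss()

theta = [p.detach().clone() for p in model.parameters()]
d1 = [torch.randn_like(p) for p in theta]      # two random directions in parameter space
d2 = [torch.randn_like(p) for p in theta]

def loss_at(a: float, b: float) -> float:
    """Evaluate the loss at theta + a*d1 + b*d2 without tracking gradients."""
    with torch.no_grad():
        for p, t, u, v in zip(model.parameters(), theta, d1, d2):
            p.copy_(t + a * u + b * v)
        return loss_fn(model(x), y).item()

grid = torch.linspace(-1.0, 1.0, 21)
landscape = torch.tensor([[loss_at(a.item(), b.item()) for b in grid] for a in grid])
print(landscape.shape)                         # (21, 21) grid of loss values to visualize

with torch.no_grad():                          # restore the trained parameters afterwards
    for p, t in zip(model.parameters(), theta):
        p.copy_(t)
```

Per the abstract, LossLens goes well beyond such a single local slice, combining local and global metrics across many minima into one visual representation.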
Enhancing Virtual Reality Training through Artificial Intelligence: A Case Study.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2024-12-04 · DOI: 10.1109/MCG.2024.3510857
Riccardo Giussani, Nicolò Dozio, Stefano Rigone, Luca Parenzan, Francesco Ferrise
Abstract: Companies face the pressing challenge of bridging the skills gap caused by relentless technological progress and high worker turnover rates, necessitating continuous investment in up-skilling and re-skilling initiatives. Virtual reality is already used in training programs; however, despite its advantages, the authoring time required to create and customize virtual environments often makes adopting this technology inconvenient. This paper proposes an architecture that facilitates the integration of artificial intelligence assistance into virtual reality training environments to improve user engagement and reduce authoring effort. The proposed architecture was tested in a study that compared a virtual training session with and without a digital assistant powered by artificial intelligence. Results indicate comparable levels of usability and perceived workload between the two conditions, with higher performance satisfaction in the assisted condition.
Citations: 0
Thirty Years of Applications.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2024-11-01 · DOI: 10.1109/MCG.2024.3466548 · Vol. 44, No. 6, pp. 52-60
Mike Potel
Abstract: IEEE Computer Graphics and Applications began publishing "Applications" as a regular department under its present editor 30 years ago, in November 1994, with the goal of featuring interesting examples of using computer graphics to solve real-world problems. The Applications department has appeared in every issue since, making the present article the 181st to appear. To mark this occasion, the Applications department looks back by revisiting the most cited articles that have appeared since the department's inception.
Citations: 0
Situated Visualization in Motion.
IF 1.7 · Zone 4 (Computer Science)
IEEE Computer Graphics and Applications Pub Date: 2024-11-01 · DOI: 10.1109/MCG.2024.3462129 · Vol. 44, No. 6, pp. 142-150
Lijie Yao
Abstract: We define visualization in motion and make several contributions to how to visualize and design situated visualizations in motion. In situated data visualization, the data are visualized directly near their data referent, i.e., the physical space, object, or person they refer to (Bressa et al., 2022). Situated visualizations are often useful in contexts where the data referent or the viewer does not remain stationary but is in relative motion; for example, a runner looks at visualizations from their fitness band while running. Reading visualizations in such scenarios may be affected by motion factors, so understanding how best to design visualizations under motion is important. We define visualizations in motion as visual data representations used in contexts that exhibit relative motion between a viewer and an entire visualization. We propose a research agenda outlining the opportunities and challenges of visualization in motion (Yao et al., 2022). We then investigate (a) how motion factors can affect the reading accuracy of visualizations (Yao et al., 2022), (b) how to design and embed visualizations in motion in a real application scenario (Yao et al., 2024), and (c) the user experience and design tradeoffs of visualization in motion through a case study (Yao et al., 2024).
Citations: 0