MR-CoCo: an Open Mixed Reality Testbed for Co-located Couple Product Configuration and Decision-Making - A Sailboat Case Study
Fabio Vangi, Daniel Medeiros, Mine Dastan, Michele Fiorentino
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616734

Abstract: The literature has demonstrated the advantages of Mixed Reality (MR) for product configuration, providing a more engaging and effective end-user experience. While collaborative and remote design tools in MR have been widely explored in previous studies, a noticeable gap remains in the exploration of co-located product configuration for couples. This gap is noteworthy since in many industries couples (e.g., friends, partners) often make purchasing decisions together in physical retail environments. In this paper, we introduce MR-CoCo, an open MR testbed designed to explore collaborative configuration by co-located couples, both in the role of customers. The testbed is developed in Unity and features: (i) a shared MR space with anchoring of the virtual product's 3D model, (ii) shared visualization of the current configuration, (iii) a versatile UI for selecting configuration areas, and (iv) hand gestures for 3D drag and drop of colors and materials from a 3D catalog onto the product. A case study on the personalization of a sailboat is provided as a proof of concept. The user study involved 24 couples (48 participants in total), who simulated a purchasing experience and the related configuration using MR-CoCo. We assessed usability through post-experience evaluations, using the System Usability Scale (SUS) and the Co-Presence Configuration Questionnaire (CCQ) to measure collaboration and decision-making. The results demonstrated a high level of usability and perceived quality of collaboration. We also discuss guidelines that can be applied to remote collaboration applications, enabling configuration across a wide range of industries (e.g., automotive and clothing).

Bundling-Aware Graph Drawing Revisited
Markus Wallinger, Tommaso Piselli, Alessandra Tappini, Daniel Archambault, Giuseppe Liotta, Martin Nollenburg
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616583

Abstract: Edge bundling algorithms can significantly improve the visualization of dense graphs by identifying and bundling together suitable groups of edges, thus reducing visual clutter. As such, bundling is often viewed as a post-processing step applied to a drawing, and the vast majority of edge bundling algorithms take a graph and its drawing as input. A different way of thinking about edge bundling is to optimize the drawing and the bundling simultaneously, which is what we investigate in this paper. We build on earlier work in which we introduced a novel algorithmic framework for bundling-aware graph drawing consisting of three main steps: Filter a skeleton subgraph, Draw the skeleton, and Bundle the remaining edges against the drawing of the skeleton. We propose several alternative implementations and experimentally compare them against each other and against the simple approach of first drawing the full graph and subsequently applying edge bundling to it. The experiments confirm that bundled drawings created by our Filter-Draw-Bundle framework outperform previous approaches according to metrics for edge bundling and graph drawing.

WarpVision: Using Spatial Curvature to Guide Attention in Virtual Reality
Jerome Kudnick, Martin Weier, Colin Groth, Biying Fu, Robin Horst
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616806

Abstract: With the advent of consumer-targeted, low-cost virtual reality devices and facile authoring technologies, the development and design of experiences in virtual reality are becoming more accessible to non-expert authors. However, the inherent freedom of exploration in these virtual spaces presents a significant challenge for designers seeking to guide user attention toward points and objects of interest. This paper proposes WarpVision, a new technique that utilizes spatial curvature to subtly guide the user's attention in virtual reality. WarpVision distorts an area around the point of interest, thus changing the size, form, and location of all objects and the space around them. In this way, the user's attention can be guided even when the point of interest is not in the immediate field of vision. WarpVision is evaluated in a within-subjects user study, comparing it to the state-of-the-art technique Deadeye. Participants completed visual search tasks in two virtual environments while being supported by WarpVision at four different intensities. Results show that WarpVision significantly reduces search times compared to Deadeye. While both techniques introduce comparable levels of immersion disruption, WarpVision has a lower reported impact on the user's well-being.

The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR
Christian Merz, Niklas Krome, Carolin Wienrich, Stefan Kopp, Marc Erich Latoschik
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616864

Abstract: This study explores how people interact in dyadic social eXtended Reality (XR), focusing on two main factors: the animation type of a conversation partner's avatar and how immersed the user feels in the virtual environment. Specifically, we investigate how 1) idle behavior, 2) AI-generated gestures, and 3) motion-captured movements from a confederate (a controlled partner in the study) influence the quality of conversation and how that partner is perceived. We examined these effects in both symmetric interactions (where both participants use VR headsets and controllers) and asymmetric interactions (where one participant uses a desktop setup). We developed a social XR platform that supports asymmetric device configurations to provide varying levels of immersion. The platform also supports a modular avatar animation system providing idle behavior, real-time AI-generated co-speech gestures, and full-body motion capture. Using a 2×3 mixed design with 39 participants, we measured users' sense of spatial presence, their perception of the confederate, and the overall conversation quality. Our results show that users who were more immersed felt a stronger sense of presence and viewed their partner as more human-like and believable. Surprisingly, however, the type of avatar animation did not significantly affect conversation quality or how the partner was perceived. Participants often reported focusing more on what was said rather than how the avatar moved.

Investigating the Effects of Haptic Illusions in Collaborative Virtual Reality
Yannick Weiss, Julian Rasch, Jonas Fischer, Florian Muller
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616760

Abstract: Our sense of touch plays a crucial role in physical collaboration, yet rendering realistic haptic feedback in collaborative extended reality (XR) remains a challenge. Co-located XR systems predominantly rely on prefabricated passive props that provide high-fidelity interaction but offer limited adaptability. Haptic Illusions (HIs), which leverage multisensory integration, have proven effective in expanding haptic experiences in single-user contexts. However, their role in XR collaboration has not been explored. To examine the applicability of HIs in multi-user scenarios, we conducted an experimental user study (N = 30) investigating their effect on a collaborative object handover task in virtual reality. We manipulated visual shape and size individually and analyzed their impact on users' performance, experience, and behavior. Results show that while participants adapted to the illusions by shifting sensory reliance and employing specific sensorimotor strategies, visuo-haptic mismatches reduced both performance and experience. Moreover, mismatched visualizations in asymmetric user roles negatively impacted performance. Drawing from these findings, we provide practical guidelines for incorporating HIs into collaborative XR, marking a first step toward richer haptic interactions in shared virtual spaces.
{"title":"Radiance Fields in XR: A Survey on How Radiance Fields are Envisioned and Addressed for XR Research.","authors":"Ke Li, Mana Masuda, Susanne Schmidt, Shohei Mori","doi":"10.1109/TVCG.2025.3616794","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616794","url":null,"abstract":"<p><p>The development of radiance fields (RF), such as 3D Gaussian Splatting (3DGS) and Neural Radiance Fields (NeRF), has revolutionized interactive photorealistic view synthesis and presents enormous opportunities for XR research and applications. However, despite the exponential growth of RF research, RF-related contributions to the XR community remain sparse. To better understand this research gap, we performed a systematic survey of current RF literature to analyze (i) how RF is envisioned for XR applications, (ii) how they have already been implemented, and (iii) the remaining research gaps. We collected 365 RF contributions related to XR from computer vision, computer graphics, robotics, multimedia, human-computer interaction, and XR communities, seeking to answer the above research questions. Among the 365 papers, we performed an analysis of 66 papers that already addressed a detailed aspect of RF research for XR. With this survey, we extended and positioned XR-specific RF research topics in the broader RF research field and provide a helpful resource for the XR community to navigate within the rapid development of RF research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145214968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Effects of AI-Powered Embodied Avatars on Communication Quality and Social Connection in Asynchronous Virtual Meetings
Hyeongil Nam, Muskan Sarvesh, Seoyoung Kang, Woontack Woo, Kangsoo Kim
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616761

Abstract: Immersive technologies such as virtual and augmented reality (VR/AR) allow remote users to meet and interact in a shared virtual space using embodied virtual avatars, creating a sense of co-presence. However, asynchronous communication, which is essential in many real-world contexts, remains underexplored in these environments. Traditional playback-based systems lack interactivity and often fail to preserve critical contextual cues necessary for effective asynchronous communication. In this paper, we introduce AVAGENTs, AI-powered virtual avatars that replicate users' verbal and nonverbal cues from recordings of past meetings. AVAGENTs can interpret meeting context and generate appropriate responses to questions posed by asynchronous viewers. Through a user study (N = 30), we evaluated AVAGENTs against a traditional playback method and a voice-based AI assistant across two asynchronous meeting scenarios: analytic reasoning and affective resonance. Results showed that AVAGENTs enhance the asynchronous communication experience by increasing social presence, sense of belonging, emotional intimacy, and other user perceptions. We discuss the findings and their implications for designing effective AI-driven asynchronous communication tools in VR/AR environments.

Selection at a Distance through a Large Transparent Touch Screen
Sebastian Rigling, Steffen Koch, Dieter Schmalstieg, Bruce H Thomas, Michael Sedlmair
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616756

Abstract: Large transparent touch screens (LTTS) have recently become commercially available. These displays have the potential for engaging Augmented Reality (AR) applications, especially in public and shared spaces. However, the interaction with objects in the real environment behind the display remains challenging: users must combine pointing and touch input if they want to select objects at varying distances. There is a lot of work on wearable or mobile AR displays, but little on how users interact with LTTS. Our goal is to contribute to a better understanding of natural user interaction for these AR displays. To this end, we developed a prototype and evaluated different pointing techniques for selecting 12 physical targets behind an LTTS, with distances ranging from 6 to 401 cm. We conducted a user study with 16 participants and measured user preferences, performance, and behavior. We analyzed the change in accuracy depending on the target position and the selection technique used. Our findings include: (a) users naturally align the touch point with their line of sight for targets farther than 36 cm behind the LTTS; (b) this technique provides the lowest angular deviation compared to other techniques; (c) some users close one eye to improve their performance. Our results help to improve future AR scenarios using LTTS systems.

Towards Augmented Reality Support for Swarm Monitoring: Evaluating Visual Cues to Prevent Fragmentation
Aymeric Henard, Etienne Peillard, Jeremy Riviere, Sebastien Kubicki, Gilles Coppin
IEEE Transactions on Visualization and Computer Graphics, 2 October 2025. DOI: 10.1109/TVCG.2025.3616840

Abstract: Swarm fragmentation, the breakdown of communication and coordination among robots, can critically compromise a swarm's mission. Integrating Augmented Reality support into swarm monitoring, especially through co-located visualisations anchored directly on the robots, may enable human operators to detect early signs of fragmentation and intervene effectively. In this work, we propose three localised visual cues, targeting robot connectivity, dominant decision influences, and movement direction, to make explicit the underlying Perception-Decision-Action (PDA) loop of each robot. Through an immersive Virtual Reality user study, 51 participants were tasked with both anticipating potential fragmentation and selecting the appropriate control to prevent it, while observing swarms exhibiting expansion, densification, flocking, and swarming behaviours. Our results reveal that a visualisation emphasising inter-robot connectivity significantly improves anticipation of fragmentation, though none of the cues consistently enhance control selection over a baseline condition. These findings underscore the potential of co-located AR-enhanced visual feedback to support human-swarm interaction and inform the design of future AR-based supervisory systems for robot swarms. A free copy of this paper and all supplemental materials are available at https://osf.io/49gny.
{"title":"Spatiotemporal Calibration and Ground Truth Estimation for High-Precision SLAM Benchmarking in Extended Reality.","authors":"Zichao Shu, Shitao Bei, Lijun Li, Zetao Chen","doi":"10.1109/TVCG.2025.3616838","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616838","url":null,"abstract":"<p><p>Simultaneous localization and mapping (SLAM) plays a fundamental role in extended reality (XR) applications. As the standards for immersion in XR continue to increase, the demands for SLAM benchmarking have become more stringent. Trajectory accuracy is the key metric, and marker-based optical motion capture (MoCap) systems are widely used to generate ground truth (GT) because of their drift-free and relatively accurate measurements. However, the precision of MoCap-based GT is limited by two factors: the spatiotemporal calibration with the device under test (DUT) and the inherent jitter in the MoCap measurements. These limitations hinder accurate SLAM benchmarking, particularly for key metrics like rotation error and inter-frame jitter, which are critical for immersive XR experiences. This paper presents a novel continuous-time maximum likelihood estimator to address these challenges. The proposed method integrates auxiliary inertial measurement unit (IMU) data to compensate for MoCap jitter. Additionally, a variable time synchronization method and a pose residual based on screw congruence constraints are proposed, enabling precise spatiotemporal calibration across multiple sensors and the DUT. Experimental results demonstrate that our approach outperforms existing methods, achieving the precision necessary for comprehensive benchmarking of state-of-the-art SLAM algorithms in XR applications. Furthermore, we thoroughly validate the practicality of our method by benchmarking several leading XR devices and open-source SLAM algorithms. The code is publicly available at https://github.com/ylab-xrpg/xr-hpgt.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145215055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}